Whether it's the increasingly challenging workload or you simply want to enhance your Node.js app's fault tolerance and availability, there comes a time when you just need to scale it up, right? To "squeeze" the best performance out of your entire infrastructure of... nodes. Well then, here's how to scale your Node.js app:
And scaling up your web back-end app at different levels — overall improving its throughput — sure isn't an afterthought with Node.js:
Scalability is built into the very core of the runtime.
And the infrastructure of nodes, strategically distributed, communicating with each other, is what makes this runtime particularly scalable.
So, what is the best way to scale up your Node.js app?
Which are the most powerful built-in tools for scalability to explore and to “exploit”? And what are the best strategies to go for depending on your specific scenario and scalable architecture needs?
Horizontally Scaling Your Node.js App
Horizontal scaling comes down to... duplicating:
Basically, you duplicate your application instance, enabling it to “cope with” a larger number of incoming connections.
Note: you can horizontally scale your Node.js app either across different machines or on a single multi-core machine.
A word of caution: do keep in mind, though, that this scaling solution might add unnecessary complexity to your app's infrastructure; it might entail the need to provision and maintain a load balancer, might make troubleshooting more challenging, and might even change the way you deploy your app.
That being said: make sure that it's specifically this Node.js scaling solution that your project needs before you go ahead and implement it!
Vertical Scaling
If your scalability architecture needs involve nothing more than:
injecting more power
adding more memory
… with no particular “tweaking” applied to the code, then vertical scaling might just be the right answer to the “how to scale your Node.js app” dilemma.
Here's why:
by default, Node won't use more than about 1.76GB of memory on a 64-bit machine
on a machine with 32GB of RAM, for instance, the Node process will therefore limit itself to just a fraction of the available memory, unless you raise that limit yourself (more on that below)
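So, if you do throw more RAM at the machine, you'll most likely want to raise V8's old-space limit as well; something along these lines (the value is in megabytes and purely illustrative):
node --max-old-space-size=8192 app.js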
Have Multiple Processes Running on The Same Machine
Here's another possible answer to your “How to Scale your Node.js app” question:
Have multiple processes running on the same port.
It goes without saying that this scaling solution calls for some kind of internal load balancing that would distribute the incoming connections across the entire ecosystem of cores/processes.
Word of caution!
Not sure whether there's any need to add this: keep the number of running processes no higher than the number of cores!
From here on, let's focus on 2 tools for scalability that you might want to tap into:
The Cluster Module
Node's cluster module makes a great starter for scaling up your application on a single machine.
How does it work precisely?
It makes setting up child processes sharing server ports conveniently easy.
Practically, one "master" process will be in charge of spawning all the child processes (one "worker" per core) — the ones that actually run your Node.js app.
Feel free to dig deeper into the details of the whole process.
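To give you a rough idea, here's a minimal sketch of what such a setup could look like (a hypothetical server file, just for illustration):
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // the "master": spawn one worker per CPU core
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // each "worker" runs its own copy of the app, all sharing port 3000
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}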
Yet, there are certain limitations to this basic scaling solution:
in case one of your child processes “dies”, it doesn't... regenerate itself
you'll need to handle the master-worker processes difference... the “old school way”, using an “if-else” block
there's no way of modifying multiple processes, at once, on-the-fly!
Note: yet, when it comes to the “dead child processes” drawback, there's... hope. For instance, use this piece of code that would enable the master process to... respawn the “worker”:
cluster.on('exit', (worker, code, signal) => {
  // a worker died: spawn a fresh one to take its place
  cluster.fork();
});
And voila! This drawback has been taken off your list!
The PM2 Cluster Module
Using the PM2 cluster module, the "how to scale your Node.js app" dilemma turns into:
“Lay back and let the PM2... clusterfy your server for you!”
All you need to do is “trigger” this command's superpower:
pm2 start app.js -i 4 --name="api"
It will instantly create a 4-process cluster for you!
Now, here are some more details about what's going on “under the hood” during this process:
the PM2 daemon will take over the former "master" process' role and spawn N processes (the former "worker" processes), relying on round-robin balancing
moreover, if it's PM2 process manager that you're using, your process gets automatically scaled across all the existing cores (no need to trigger the cluster module for that anymore)
also, the same PM2 process manager will ensure that processes restart, instantly, if they happen to crash
You'll just need to write your Node.js app as if it were for single-core usage and the PM2 module will make sure that it gets scaled for multi-core.
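If you'd rather not type those flags every time, PM2 also lets you describe the cluster in an ecosystem file; here's a minimal sketch (the file and app names are just examples):
// ecosystem.config.js (a minimal sketch; names are just examples)
module.exports = {
  apps: [{
    name: 'api',
    script: 'app.js',
    instances: 'max',      // one process per available CPU core
    exec_mode: 'cluster'   // let PM2 handle the clustering for you
  }]
};
You'd then start (and cluster) everything with a simple pm2 start ecosystem.config.js.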
Note: now if you want to scale your Node.js application further, you might want to consider deploying more machines...
Scaling Across Multiple Machines with Network Load Balancing
The underlying process is quite similar to the "multiple core scaling" one, if you come to think of it:
Instead of several cores, you'll have several machines; each one will be running one or more processes and will get “backed up” by a load balancer redirecting traffic to each machine in this infrastructure.
“And how does a network balancer work, more precisely?” you might ask yourself:
Once a request is sent to a node, the balancer sends the traffic to a specific process.
And there are 2 ways of deploying your internal balancer:
deploy a machine and set up a network balancer yourself, using NGINX
use a managed load balancer (like Elastic Load Balancer); setting it up is conveniently easy and it “spoils” you with all kinds of built-in features, such as auto-scaling
Now, if your "How to scale your Node.js app" question turns into an "Isn't it risky to have just one point of failure for my infrastructure?" one:
Just deploy multiple load balancers instead of relying on a single balancer.
They would all be pointing to the same servers, needless to add.
Note: for distributing traffic across your “ecosystem” of internal balancers, you could just add several DNS “A” records to your main domain.
How to Scale Your Node.js App: 3 Scaling Strategies to Consider
1. Decomposing
"Microservices" is another word for this scaling strategy, since practically you'll be "juggling" multiple microservices (although their size is of no significant importance, actually).
Or multiple applications, with different codebases (and in many cases, each one of them has its own UI and dedicated database).
And it's by services and functionalities that you'll be decomposing/scaling your Node.js app. A strategy that can lead to unexpected issues in the long run, but which, if implemented correctly, translates into clear gains for your apps' performance.
2. Splitting
Or “horizontal partitioning” or “sharding”, if you prefer. This strategy involves splitting your app into multiple instances, each one responsible for a single, specific part of your app's data!
Word of caution: data partitioning calls for a lookup before you carry out each operation; this way you'll identify the right instance of the application to be used.
Take this example here:
You might want to partition your Node.js app's users by language or area of interest. In this case, a lookup step is a must; you'll need to check that information, first things first.
3. Cloning
And this is the easiest strategy at hand for solving your “How to scale your Node.js app” dilemma!
Just clone your Node.js back-end application, multiple times, and assign a specific part of the workload to each cloned instance!
It's both effective and cost-effective!
Moreover, Node's cluster module makes cloning on a single server remarkably easy to implement!
And this is “How to scale your Node.js app”! See? You have not just one, but several Node.js built-in tools at hand and various strategies to choose from, depending on your scaling needs.
Which scaling solution suits you/your app project best?
RADU SIMILEANU / May 03'2018
Have no fear... Node.js 10 is here (since April 24, actually)! And, as expected, this version is planned to grow into the platform's official Long Term Support version (in October 2018); to be supported for 3 years after that date.
So? What's in it for you, the back-end web developer?
Are there any new features and improvements worth getting really excited about? Which are they, and how precisely will they improve the overall developer experience?
Now before we take a deep dive into the “steamy fresh load” of new features, I feel like pointing out that:
it's mostly incremental improvements, applied throughout the entire codebase of the platform, that Node.js 10 ships with
… performance, reliability and stability-centered improvements, bubbling up to the back-end developer's experience
But let's name these improvements that ship with the new version of Node.js. Let's talk specific incremental changes, shall we?
10 of the “really worth getting excited about” ones:
1. Error-Handling Improvements
And error messages/error-handling improvements do make up the majority of the semver-major commits (approx. 300) that Node.js 10 ships with.
It's a “pledge” made since Node.js 8.0.0 to assign static error codes to all Error objects:
“Error messages should be useful, more consistent and predictable”, this has been the “pledge” driving all the sustained efforts geared at improving error-handling.
Note: error codes have been included in Node.js 10, making programmatic error checking against constant codes conveniently easier!
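In practice, that means you can branch on err.code instead of matching error message strings; a quick sketch (the file path is made up, obviously):
const fs = require('fs');

fs.readFile('/some/missing/file', (err, data) => {
  if (err) {
    // codes like 'ENOENT' stay stable across releases,
    // unlike the human-readable err.message
    if (err.code === 'ENOENT') {
      console.log('File simply is not there, no reason to panic');
      return;
    }
    throw err;
  }
  console.log(data.toString());
});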
2. Enhanced JavaScript Language Capabilities
There's an entire list of Node.js 10 language improvements (you can find them all here) worth exploring and... exploiting; I'll outline the highlights only, with a quick sketch right after the list:
you now get to use line and paragraph separator symbols (U+2028 and U+2029) in string literals, matching JSON's behavior
V8 "introduces" String.prototype.trimStart() and String.prototype.trimEnd() (trim() itself has been around for a while)
Function.prototype.toString() now returns the exact "pieces" of the source code text (comments and whitespace included!)
the catch clause of try statements no longer requires a parameter (optional catch binding)
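Here's a quick taste of a couple of these (plain JavaScript, nothing Node-specific):
const padded = '   hello   ';
console.log(padded.trimStart()); // 'hello   '
console.log(padded.trimEnd());   // '   hello'

// optional catch binding: no unused (err) parameter needed anymore
try {
  JSON.parse('{ not valid json');
} catch {
  console.log('could not parse, moving on');
}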
3. The Node.js fs (file system) Has Been Significantly Overhauled
And here are the most “dramatic” improvements made during this overhaul:
the type checking and error handling have been improved
the code got restructured, for easier maintainability
a new experimental fs/promises API got implemented, featuring a first-class Promise-based version of the fs methods
Speaking of this new API: being experimental, it generates a warning at runtime the very first time it gets used. Hopefully, things will turn out "bug-free" so that it can graduate from experimental to stable.
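Here's roughly what using it looks like in Node.js 10, where it's exposed as fs.promises (the file name is just an example; expect the experimental warning mentioned above):
const fsPromises = require('fs').promises;

async function readConfig() {
  // no callbacks, no manual promisify(): just await the fs call
  const contents = await fsPromises.readFile('./config.json', 'utf8');
  return JSON.parse(contents);
}

readConfig()
  .then(config => console.log(config))
  .catch(err => console.error('Could not read config:', err.code));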
4. Node.js 10 Ships with Full Support for N-API
N-API — the ABI stable (Node.js) API for native modules — has leveled up to a stable version in Node.js 10.
What does this mean?
it provides a stable module API, one that is not influenced by the changes in Node.js's V8 JavaScript engine
the API layer makes upgrading a whole lot easier, streamlining production deployments and... easing module maintainers' lives
… and it goes without saying that this bubbles up to native modules' maintenance costs, as well
In short: say goodbye to module breakage!
5. The Assert Module: Explore Some Crucial Improvements
All efforts targeting the assert module have been aimed at easing the internal implementation and improving the developer experience.
But let me point out some of these improvements that eventually fleshed out and landed in Node.js 10 (two of them sketched right after the list):
a new “diff” view got implemented, for whenever assertion errors get generated
overall the output becomes more descriptive, more... “verbose” (and implicitly more useful)
better object comparisons
promises support
detailed error messages
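To make it less abstract, here's a small sketch touching two of those improvements: the more readable diff output and the new promise support (assert.rejects()); the values used are just placeholders:
const assert = require('assert').strict;

// when this fails, Node.js 10 prints a readable diff of the two objects
assert.deepEqual({ port: 3000, debug: false }, { port: 3000, debug: false });

// assert.rejects(): promise support, new in Node.js 10
assert.rejects(
  async () => { throw new TypeError('bad input'); },
  TypeError
).then(() => console.log('the promise rejected, as expected'));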
6. Node.js 10 Ships With V8 6.6: Expect a Major Performance Boost
Get ready to “exploit” V8 6.6's range of performance improvements to their full potential! Along with its new set of JavaScript language features!
Of them all, I can't help but mention:
the async functions
the async generators
the promise execution
7. Cryptographic Support
Node.js 10 is the first version of the platform to ship with OpenSSL 1.1.0! And this can only translate into:
Enhanced protection for your priceless data!
Now, if I am to outline just 2 of the OpenSSL features to look forward to tapping into, I should definitely mention:
the Poly1305 authenticator
the ChaCha20 cipher
8. The Trace Events Mechanism: Monitoring Your Code's Performance Just Got Easier
That's right! Keeping a close eye on how your code's performing and being able to quickly diagnose any emerging issues is easier than ever with Node.js 10!
Basically, what these trace events do is enable all the diagnostic output to be collected into a file that the Chrome browser's DevTools utility can read.
No need to use a command-line flag anymore to trigger this whole trace events mechanism underlying Node.js.
And since we're here, let me point out to you 2 trace events-related improvements worth getting (really) excited about:
the node.perf.usertiming category got added — its role is that of capturing, in the trace events timelines, all the Performance API user timing marks and measures.
the JavaScript API got implemented, as well; enabling/disabling trace events dynamically is now possible in Node.js:
const trace_events = require('trace_events')
const tracing = trace_events.createTracing({
categories: ['node.async_hooks', 'v8']
})
tracing.enable()
// do stuff
tracing.disable()
9. HTTP and HTTP/2 Improvements
Another thing to get excited about, when it comes to Node.js 10's release, is given by all the incremental improvements made to HTTP and HTTP/2.
Let me detail a bit:
when it comes to HTTP, the changes range from improved Streams API compatibility to stricter standards support and improved header and error handling
now, when it comes to HTTP/2, significant progress has been made to get it as close to "stable" as possible before Node.js 10 reaches its Long Term Support cycle. And I'm talking here about improvements to how trailing headers are handled in requests and responses (see the rough sketch below) and about overall improvements to the internal implementation and the public API
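To illustrate the trailing headers bit, here's a rough sketch using the core http2 module (plain-text h2c and made-up header values, just to keep it short):
const http2 = require('http2');

const server = http2.createServer();

server.on('stream', (stream) => {
  // ask Node.js to keep the stream open so trailers can be sent after the body
  stream.respond(
    { ':status': 200, 'content-type': 'text/plain' },
    { waitForTrailers: true }
  );

  stream.on('wantTrailers', () => {
    stream.sendTrailers({ 'server-timing': 'app;dur=12.3' });
  });

  stream.end('hello over HTTP/2');
});

server.listen(8080);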
10. Node.js Ships With The Experimental Node-ChakraCore
And how does this impact the developer experience? Your experience?
using the JavaScript engine to its full potential
tapping into the Time Travel debugging
… gets a whole lot easier for you. You're practically enabled to detect errors way before they even get to “infest” your code.
The END! This is what our list of 10 Node.js 10 features worth getting (really) excited about looks like! Do explore them and start your preparations for moving over to this new version of Node.js before October!
RADU SIMILEANU / May 02'2018
What's the deal with the virtual DOM? How, precisely, does the React Virtual DOM work? It's significantly faster, without question, and it brings a whole series of benefits to coding.
How come?
Which efficiency issues of the “real” DOM does it solve? And what makes the way that React.js manipulates the DOM better than the “standard” way?
Let's get you some answers:
But First: What Is the DOM Anyway?
"Document Object Model."
It's only but natural that, before we get into details on React and the Virtual DOM, we gain a deep understanding of the DOM itself.
Therefore, here's a definition that hopefully sheds enough light on this concept:
DOM is a tree-structured abstraction of (or an in-memory representation, if you prefer) a page's HTML code. One that preserves the parent/child relationships between the nodes within its tree-like structure.
Any better?
The major benefit is the API that it provides, which allows us, developers, to easily scan through the HTML elements of a page and manipulate them as needed. For instance:
to add new nodes
to edit a given node's content
to remove specific nodes
And What Is DOM Manipulation More Precisely?
It's the very process that enables the content on any of your website's pages to be dynamically updated.
Needless to add that it's JavaScript that you would use when handling the DOM. Also, methods such as:
removeChild
getElementById
… are included in the API that the “actual” DOM provides you with.
What Efficiency Challenges Does the "Real" DOM Face?
Now, before we go back to your initial “dilemma” (“how React Virtual DOM works”), let's see why a “virtual” DOM was even needed in the first place.
What efficiency issues of the “real” DOM does it address?
So, it's JavaScript that we use as we manipulate the DOM, right? And it used to work fantastically back in the days when static UIs "ruled" and the concept of dynamically updating nodes hadn't yet been... "invented".
Well, since then things have changed...
The DOM manipulation, once the core process of all modern interactive web pages, started to show its limitations. And that's because the "real" DOM updates a "target" node along with the entire web page (with its corresponding layout and CSS).
For instance, imagine that:
You have a list of items and it's just one of those items that you need to update. Traditionally, the “real” DOM would re-render the entire list and not exclusively the items that receive updates. See?
Just think of a scenario where you have an SPA (Single Page App). One with thousands of dynamically generated nodes, that would all need to “listen to” lots of future updates and to re-render them in the UI.
It's here that things get discouragingly... slow!
The real DOM can't cope with pages carrying thousands and thousands of components to be re-rendered when updates are being passed through.
It's in this context here that the virtual DOM stepped in! And it's React that makes the most of it.
Clear enough?
How React Virtual DOM Works: Snapshots, Diffing and Reconciliation
Before we get into the “how”, let's shed some light on the “what”. What is the “virtual” DOM?
A lightweight abstraction/copy of the HTML DOM, with the same properties as the "real" one. The only difference is that it can't write to the screen the way the actual DOM can. Also, it's local to React.
A copy of the actual DOM that you get to update “intensively” without impacting the real DOM.
Note: do keep in mind that it isn't React that introduced this concept, since there are plenty of other libraries that use it.
Snapshots, Diffing and Reconciliation
Now, let's get into details on how React virtual DOM works.
a. First of all, React takes a virtual DOM snapshot before doing any updates.
b. It will then use it (this record of the DOM state) to compare it against the updated virtual DOM, before applying any changes to the actual DOM itself.
And it's a “diffing algorithm” that supports all this comparing and enables React to identify any changes. To detect the updates that have been applied.
Also, the entire process is called “reconciliation”:
Whenever updates need to be made to the actual DOM, React updates the Virtual DOM first, and then, once it has done its comparing, it syncs the Real DOM.
In other words: before applying any of the requested updates, React makes a copy of the virtual DOM, which it will then set against the updated virtual DOM (diffing). It's during this diffing-reconciliation process that React detects the changes that have been applied and identifies the objects to be updated.
And it's precisely those objects that it will update in the actual DOM.
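To make this a bit more concrete, here's a tiny, purely illustrative sketch (hypothetical component and data, class-style React of the day):
import React from 'react';

class FruitList extends React.Component {
  state = { fruits: ['apple', 'pear', 'plum'] };

  renameFirst = () => {
    // only the first item in state changes on click
    this.setState(({ fruits }) => ({
      fruits: ['apricot', ...fruits.slice(1)]
    }));
  };

  render() {
    return (
      <ul onClick={this.renameFirst}>
        {this.state.fruits.map((fruit, index) => {
          // after diffing, React updates just the changed <li>'s text node
          // in the real DOM; the untouched siblings are left alone
          return <li key={index}>{fruit}</li>;
        })}
      </ul>
    );
  }
}

export default FruitList;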
The huge benefits?
virtual DOM updates a whole lot faster
it updates exclusively the "target" nodes, leaving the rest of the page alone
Summing Up
To recap, let's try and sum up this whole “How React Virtual DOM Works” guide here to its bare essentials.
So, here's how React updates the DOM in 3 simple steps:
first, it applies the given updates to the whole Virtual DOM
then, it compares it against the snapshot of the virtual DOM it took beforehand, using an algorithm called "diffing" to spot any changes/contrasts
then, it's specifically (and exclusively) those changed elements that it updates in the actual DOM
The END! Have I managed to make this process any clearer for you? Can you now see what's “under the hood” of the way React updates DOM?
And the specific reasons why it's so much faster than the real DOM manipulation?
RADU SIMILEANU / Apr 26'2018
Whether you're "constrained" to migrate content to Drupal 8 or you're just eager to jump on the Drupal 8 bandwagon and harness its much-talked-about advanced features, the most important “warning/advice” to keep in mind is:
Don't migrate mindlessly!
Meaning that before you even get to the point of:
triggering the Migrate module's capabilities and adjusting them to your migration project's needs and requirements
selecting and combining all the needed contrib modules
writing down your YAML files for carrying out your content migration process
You'll need to think through every little aspect involved in/impacted by this process:
your goals
your growth plan
your current site visitors' complaints and suggestions
That being said, here's more of a “backbone” or summary of the migration workflow, one that highlights the:
main phases to go through
the right approach to the whole process
Drupal-specific concepts and tools to use
Do NOT expect a very detailed, highly technical tutorial, though!
As for the Drupal concepts that you'll need to be already (more than) familiar with once you launch your migration process, maybe you want to have a look at this guide here, on Understanding Drupal.
And now, let's delve in:
1. The Migration Workflow: 4 Key Phases to Consider
Here's the entire process in 4 steps (so you know what to expect):
first, you'll need to migrate your data into the destination nodes, files and paragraphs on the newly built Drupal 8 site
then you'll migrate data into date, image, taxonomy, address and file fields
next, you'll move your precious data from JSON and CSV files
and finally, you'll complete your migrations from the UI and the terminal
2. Are You Upgrading from Drupal 6 or 7 or Migrating From a Different System?
And here's what to expect depending on your answer to the above question:
if you migrate content to Drupal 8 from an older version of Drupal (6 or 7), then you're quite “spoiled”: a lot of hard work has been done, by the Drupal community, for turning this migration process into the official path to Drupal 8; you could say that the solid framework has already been set up, so all there's left for you to do is to... take advantage of it!
if it's from a whole different system that you're migrating your site (let's say WordPress or maybe Joomla), then... expect it to be a bit more challenging. Not impossible, yet more complex
3. Plan Everything in Detail: Think Everything Through!
Now with the risk of sounding awfully annoying and repetitive, I feel like stressing this out:
Don't migrate... mindlessly!
Plan everything in the smallest detail. Re-evaluate the content on your current site and its “load” of features.
Take the time to define your clear goals and to put together your growth plan (if there's any).
Then, do lend ear to what your current site visitors have to say, filter through all their complaints and suggestions and tailor your final decisions accordingly.
It's only then that you can go ahead and set up your content architecture.
4. Start With the Structure: Build Your Drupal 8 Site First
“But I haven't picked a theme yet!” you might be thinking.
No need to! Not at this stage of the migration process.
You can still build your Drupal 8 site, from the ground up, even without a theme ready to be used. You can add it later on, once you have the final version of your content!
But the site itself, its solid structure, this is a “must do”. It's the very foundation of all your next operations included in your migration workflow!
5. Deep Clean & Declutter! Take Time to Audit Your Content
Don't underrate this very step! Moving over all that clutter (that heavy load of unused, outdated features and all those chaotic, crummy pages) will only drag down your Drupal 8 site's performance from the start.
So, now it's the right time to do some... deep cleaning!
Audit your content, your features, plugins and other functionalities included in your site's infrastructure and... trim it down by:
relevance (are you using it?)
quality: keyword-stuffed, unstructured pages (a heavy pile of them) will surely not give your new Drupal 8 site any significant jumpstart in rankings!
6. About the Migration Module Included in Drupal 8 Core
Using this dedicated module in Drupal core to migrate content to Drupal 8 comes down to implementing the:
Extract-Transform-Load process
Or simply: ETL.
In Drupal — as related to the Drupal migrate module — these 3 operations come under different names:
the source plugin stands for “extract”
the process plugin stands for “transform”
the destination plugin stands for “load”
7. Time to... Migrate Content to Drupal 8 Now!
Now it's time to put some order into that "pile" of content of yours! To neatly structure Google Sheets, XML files, CSV files etc.
And here's the whole “structuring process” summed up to the 3 above-mentioned plugins: source, process and destination.
Source:
XML file
SQL database
Google Sheet
CSV file
JSON file
Process:
iterator
default_value
migration_lookup
concat
get
Destination:
images
users
paragraphs
nodes
files
And here's a specific example of how to “glue” data for a neater and ideally structured content architecture:
Before the migration:
A: First Name - Kevin
B: Last Name - Thomson
C: Department - Commerce
After the migration:
A: Name - Kevin Thomson
B: Department - Commerce
8. 4 Contrib Modules to Incorporate Into Your Migration Workflow
As already mentioned, migrating content to Drupal 8 also involves using a combination of contrib modules.
Speaking of which, allow me to get them listed here:
Migrate Tools
Migrate Source CSV
Migrate Spreadsheet
Migrate Plus
The END! This is the tutorial on how to migrate content to Drupal 8 trimmed down to its bare essentials.
To its core phases, key steps to take, main Drupal concepts to "juggle" with, right approach/mindset to adopt and best tools/modules to leverage for a smooth process!
Any questions?
RADU SIMILEANU / Apr 24'2018
So, you've got so used to that IT support guy who's been "stopping by" your workplace on a regular basis, for a few years now. Or to that tech support guy, from your IT services provider, who's been... supporting you and your team, on-site, for some time now. You've met his family, he hasn't missed any of the company's Xmas parties... So, why should you even consider switching to this new managed IT services support model?
How could cutting down on his visits actually mean improved technical support services?
And you can't even say that you're grappling with the pros and cons, since you can't really put a finger on any “pros” for upgrading to this new type of technical support.
So, let's talk... benefits then! 5 of the strongest ones, actually:
Now and Then: The Managed IT Services Support Model vs The Legacy Model
Now let's have a look at the 2 most common situations where you'd leverage the “traditional” way of providing/getting IT support:
A. There's that technical consultant that pays his/her pre-scheduled visits to your workplace — who also comes when you call to report a sudden technical problem — and... “saves the day”.
B. There's an engineer from your IT services provider "glued" to your server, who provides your team with in-house technical support and preventive maintenance.
Now, let's have a look at our second scenario, where you will have already upgraded to the managed IT services model:
By leveraging a whole suite of remote monitoring and management (RMM) tools, your engineer provides you with technical support and maintenance right from his... service desk.
And, in many cases, it's him who'll alert you of emerging issues, before you even get to detect them yourself.
The 3 main benefits deriving from this new model?
there's a fixed-price engagement (instead of the pay per hour-based partnership, where you get invoiced for every “additional” intervention or extra hours)
remote monitoring and management tools guarantee you a more effective preventive and maintenance program
reactive work is no longer done on-site, but in a more timely manner, right from your provider's central location
That being said, let's get straight to the 5 clincher arguments for moving to the managed IT services support model:
1. From “Need More RAM!” Discussions to a Digital Transformation Strategy
Or, to put it into a more down to earth language:
Your conversations with your service provider will no longer be centered exclusively on how to address this or that technical problem, but rather on the measures to implement for digitally transforming your business.
For standing out in an increasingly competitive landscape, keeping up with the breakneck speed of disruption in your industry (across all industries, in fact).
It will no longer come down to putting together that standard list of issues and technical hurdles to be shown to your designated engineer next time he visits you.
Instead, you'll get to engage in real conversations with a... strategic consultant this time.
And your conversations will no longer focus on repair and maintenance, but on the initiatives to take for digitally transforming your business (translating into more relevant products for your customers).
Not on whether you need more... RAM or not to boost your workstation's performance.
2. Managed IT Services Support = Custom Scripting & Proactive Repairing
Does this scenario here seem (too) familiar to you?
You, the user, detect a technical issue... you give the service desk a call right away... a call that will then result in a... ticket for resolution.
It's only afterward, after some time, that someone at your service provider's central place will take notice of this ticket (depending on their own schedule and staff availability). And eventually come to your “rescue”.
A discouragingly toilsome process, don't you think?
Now, here's how things would play out if you embraced the managed services support model:
Leveraging a proactive repair approach, your service provider — thanks to its RMM tools — will automatically detect any suspicious issue. Then, it runs its custom scripts/procedures to get it fixed.
And all this without you, the user, even noticing that there had been a problem in the first place.
Custom procedures (custom script) can be deployed either proactively or reactively, keeping this unwanted event's impact on your day-to-day business operations to a minimum.
Now, how's that for a change?
Compared to the:
Issue tracking... alerting the service desk... having a ticket created for it and... waiting for this issue to be finally tackled
… kind of process?
3. Take the Worries of Managing Onsite IT Support Staff Off Your Back
That's right, moving to managed IT services support means that:
there'll be no need for an in-house technical support team to be sent over by your provider
you'll no longer need to assign staff management tasks (paid time off, benefits, salary etc.) to someone in your team for the technicians sent over to your workplace
also, you'll no longer need to manage their task list
“Outsourcing” is the best one-word definition for the managed IT services model!
Onsite support, along with all its inconveniences and staff management extra work, turns into off-site, “invisible” support.
Where issues get tackled from a distance before they even become… visible to you!
4. Simplify Your Auditing & Inventory Processes and Automate... Everything
Now you do agree that if:
you don't aim for high efficiency for carrying out your business processes
and you don't make “automate all the things” your ultimate goal
... you can't even be talking about digitally transforming your business.
About turning disruption in your industry from a challenge into a success story (your own).
Luckily, conveniently automated processes is what the managed IT services support model is based on.
Let me give you just a few examples of how the right RMM tools will help your service provider and benefit you, the client/user:
patching operating systems and apps get automated
managing the anti-malware gets automated
monitoring (and alerting on) your system's health gets automated
software deployments get automated
Hands-on support will be kept to a bare minimum since most of the operations can be handled right from your service provider's central place... automatically.
And there's more! Along with automation comes... simplification.
This means that by using a whole suite of digital tools, your service provider will be able to access any device in your workplace... digitally.
No need to have someone from their technical support staff come over for an inventory of issues and technical hurdles (and to eventually add/remove issues listed there).
This will simplify processes such as:
asset allocation
warranty renewals
hardware lifecycle management
software license tracking
and implicitly: your regular auditing and budgeting processes
5. A Fixed Price Engagement Motivates Your Provider to Get Prevention-Focused
Let's see if this example of a “pay per hour-based” engagement, along with its limitations and inconveniences (for you) resonates with you:
A hardware failure has just struck or you're facing ransomware. What do you do? Well, you call the "emergency team". Which does come, but... at a cost. One billed by the hour.
Now, as compared to this type of contract, the managed IT services support is based on a fixed budget. So, no matter what unexpected events might occur, your provider will address them, right away, at no additional cost.
Needless to add that it's precisely this fixed-fee model that motivates them to prevent those issues from occurring in the first place. So, they'll adopt a prevention-oriented approach.
Resulting in a win-win situation for both of you!
And this fixed-fee engagement, along with the prevention-focused mindset, will inevitably entail a whole new world of... benefits for you:
you'll no longer need to limit yourself to the level of technical competency of that particular IT support guy sent to you on a prescheduled visit; instead, you'll get to tap into a wider range of expertise, right from your provider's own workplace
you'll no longer have to wait for that prescheduled visit or to expect to get invoiced differently for any intervention that's “outside” the schedule
The END! What do you think now: is it worth it to move to this whole new way of providing/receiving IT support?
RADU SIMILEANU / Apr 19'2018
Feeling stuck? Can't seem to put a finger on at least a few clear differences between PHPStorm and WebStorm? And you need to choose the most suitable IDE software for web development?
There sure must be some strong differences, other than:
PHPStorm doesn't provide JavaScript-oriented plugin support right out-of-the-box like WebStorm does.
Now, before we go “hunting” some key differences between PHPStorm and WebStorm, I'd like to add one last recommendation to consider when you select the right IDE for you:
It all comes down to evaluating various solutions and identifying not THE BEST, but the application that's perfectly suited to your specific needs.
That being said, without further ado, let's evaluate the “candidates”!
I'll be highlighting their key features (all while outlining the key differences between them) while you set them against your business requirements and specific feature needs, OK?
First of all: A Few Words About PHPStorm and WebStorm
Both IDE software products (Integrated Development Environments) are built on top of JetBrains' IntelliJ platform. And geared at web development.
This has to be the most concise, yet comprehensive shared definition of our two “candidates” here. Let's move on to putting the spotlight on each of them, in turn...
PHPStorm: Key Features
If I am to turn a text definition into a mathematical formula, it would have to be something like this:
WebStorm + PHP support + Database support = PhpStorm
Or, if I am to stick to a “conventional”, a standard text definition, it would go something like this:
PHPStorm incorporates all the functionality that WebStorm comes equipped with (CSS, JavaScript, HTML), PLUS full-fledged PHP support (and database support).
Also, sticking to the very purpose of this blog post — pointing out the key differences between PHPStorm and WebStorm — I should add that PHPStorm doesn't support JS like WebStorm does.
It doesn't provide built-in support for JavaScript plugins like its “competitor” does.
Now when it comes to its main functionalities:
HTML & CSS editor
Code navigation
JavaScript editor
Code quality analysis
Database & SQL
Debugging
Smart PHP code editor
Testing
Intelligent coding assistance
As for the integrations that PHPStorm supports, here are the most notable ones:
some of the most popular PHP test frameworks: Behat, Codeception, PHPUnit, PHPSpec
Composer Dependency Manager; this way you get to manage your project's dependencies right from the IDE
the webpack module bundler
React; it's perfectly equipped to assist you in linting, debugging, editing, running and configuring your apps
various stylesheet preprocessors: Less, Sass, SCSS
Angular (Angular 2); it streamlines the process of building your desktop, web or mobile applications
WebStorm: Top Features
As already mentioned here: WebStorm “spoils” you, right out of the box, with support for JavaScript-oriented plugins.
Whereas, if you opt for PHPStorm, you'll need to install the needed JS plugins manually for achieving specific functionality.
And now, returning to its top features, here are just a few:
Extensive Navigation & Search capabilities
Support for React Native, PhoneGap, Cordova, Ionic and Node.js.
Unified UI for working with many popular Version Control Systems
Coding assistance for JavaScript and compiled-to-JavaScript languages, HTML, Node.js and CSS
Built-in debugger
Code quality tools
Built on top of the open-source IntelliJ Platform
Advanced coding assistance for Vue.js, React, Angular and Meteor
Spy-js tool for tracking JavaScript code
Simple unified UI for running Gulp, Grunt or npm tasks right from the IDE
… and the list of key features and tempting functionalities goes on.
Now another one of its main strengths, besides its built-in JavaScript-centered functionality, is given by all the integrations that it supports:
Spring
Microsoft
Google
Acquia
… a “detail” you sure don't want to underrate if you just consider the time and effort you'd be saving when working with an IDE that supports multiple integrations.
It will streamline the transfer of information between teams and services and cut down the valuable time otherwise invested in migrating from one software to another.
Choose WebStorm If...
... you're a front-end, JavaScript developer or, better said:
A "hardcore" one, relying on robust HTML, JavaScript and CSS-oriented features, such as JSUnit or Node.js.
Go With PHPStorm If...
... you're having trouble choosing between PHPStorm and WebStorm, the most obvious proof that the first IDE (PHPStorm) is the one for you is the following:
You're a full stack back-end developer
And so, your work depends greatly on specific features, such as refactoring PHP code and built-in debuggers.
Final Word: Differences Between PHPStorm and WebStorm
It goes without saying that there's no such thing as IDE software ideally equipped to meet ALL your requirements.
Basically, when deciding between PHPStorm and WebStorm:
defining your specific needs (JavaScript-oriented or PHP-centered) is the very first thing to do
going for the IDE that integrates well with other programs is something that you'll need to consider, given the benefits that derive from there
So, have you got your answer yet? Judging from these key differences between PHPStorm and WebStorm, which one caters to your specific requirements?
RADU SIMILEANU / Apr 10'2018
With popularity comes trouble... In this case here meaning: security vulnerabilities and risky over-exposure to cyber threats. And this can only mean that securing your website, that's running on the currently third most popular CMS in the world, calls for a set of Drupal security best practices for you to adopt.
And to stick to!
There's no other way around it: a set of strategically chosen security measures, backed by a prevention-focused mindset, pave the shortest path to top security.
Stay assured: I've selected not just THE most effective best practices for you to consider adopting, but the easiest to implement ones, as well.
Quick note: before I go on and knee-deep into this Drupal security checklist, I feel like highlighting that:
Drupal still has a low vulnerability percentage rate compared to its market share
the largest share of Drupal's vulnerabilities (46%) comes from cross-site scripting (XSS)
And now, here are the tips, techniques, and resources for you to tap into and harden your Drupal site's security shield with.
1. The Proper Configuration Is Required to Secure Your Drupal Database
Consider enforcing some security measures at your Drupal database level, as well.
It won't take you more than a few minutes and the security dangers that you'll be safeguarding it from are massive.
Here are some basic, yet effective measures you could implement:
go for a different table prefix; this will only make it trickier for an intruder to track it down, thus preventing possible SQL injection attacks
change its name to a less obvious, harder to guess one
Note: for changing your table prefix you can either navigate to phpMyAdmin, if you already have your Drupal site installed, or do it right on the setup screen (if it's just now that you're installing your website).
2. Always Run The Latest Version of Drupal on Your Website
And this is the least you could do; underestimating its importance and neglecting your updating routine will have a significant negative impact on your Drupal site.
Do keep in mind that:
it's older versions of Drupal that hackers usually target (since they're more vulnerable)
the regularly released updates deliver precisely those bug fixes and new security-hardening features that are crucial for patching your site's vulnerabilities.
Why should you leave it recklessly exposed? Running on an outdated Drupal version, packed with untrusted Drupal modules and themes?
Especially since keeping it up to date means nothing more than integrating 2 basic Drupal security best practices into your site securing “routine”:
always download your themes and modules from the Drupal repository (or well-known companies)
regularly check if there are any new updates for you to install: “Reports” → “Available Updates”→“Check manually”
3. Make a Habit of Backing Up Your Website
And here's another one of those underrated and too often neglected Drupal security best practices!
Why should you wait for a ransomware attack and realize its true importance... “the hard way”?
Instead, make a habit of regularly backing up your website since, as already mentioned:
There's no such thing as perfection when it comes to securing a Drupal site, there's only a hierarchy of different “security levels” that you can activate on your site
And backing up your site, constantly, sure stands for one of the most effective measures you could apply for hardening your Drupal website.
Now, here's how you do it:
make use of Pantheon's “one-click backup” functionality
test your updates locally using MAMP or XAMPP or another “kindred” software
harness the Backup and Migrate module's power, currently available only for Drupal 7
export your MySQL database and back up your files “the old way”... manually
There, now you can stay assured that, if/when trouble strikes, you always have your backup(s) to retrieve your data from and get back “on your feet” in no time!
4. Block Those Bots That You're Unwillingly Sharing Your Bandwidth With
No need to get all “altruist” when it comes to your bandwidth!
And to share it with all kinds of scrappers, bad bots, crawlers.
Instead, consider blocking their access to your bandwidth right from your server.
Here's how:
Add the following code to your .htaccess file and block multiple user agents at once:
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^.*(agent1|Wget|Catall Spider).*$ [NC]
RewriteRule .* - [F,L]
Or use the BrowserMatchNoCase directive as follows:
BrowserMatchNoCase "agent1" bots
BrowserMatchNoCase "Wget" bots
BrowserMatchNoCase "Catall Spider" bots
Order Allow,Deny
Allow from ALL
Deny from env=bots
Alternatively, use the KeyCDN feature for preventing those malicious bots from stealing your bandwidth!
5. Use Strong Passwords Only: One of the Easiest to Implement Drupal Security Best Practices
More often than not “easy” doesn't mean “less efficient”.
And in this particular case here, simply opting for a strong username (smarter than the standard “admin”) and password can make the difference between a vulnerable and a hard-to-hack Drupal site.
For this, just:
Manually change your credentials right from your admin dashboard: "People" → "Edit" → "Username", while relying on a strong password-generating program (KeePassX or KeePass)
6. Use an SSL Certificate: Secure All Sensitive Data and Login Credentials
Would you knowingly risk your users' sensitive data? Their card information, let's say, if it's an e-commerce Drupal site that you own?
And how about your login credentials?
For this is what you'd be doing if — though you do recognize the importance of using an SSL certificate — you'd still put this measure at the back of your list of Drupal security best practices.
In other words, by running your site on HTTPS (preferably on HTTP/2, considering all the performance benefits that it comes packaged with), you'll be:
encrypting all sensitive data that's being passed on, back and forth, between the server and the client
encrypting login credentials, instead of just letting them get sent, in crystal-clear text, over the internet.
7. Use Drupal Security Modules to Harden Your Site's Shield
For they sure make your most reliable allies when it comes to tracking down loopholes in your site's code or preventing brutal cyber attacks.
From:
scanning vulnerabilities
to monitoring DNS changes
blocking malicious networks
identifying the files where changes have been applied
… and so on, these Drupal modules will be “in charge” of every single aspect of your site's security strategy.
And supercharging your site with some of the most powerful Drupal security modules is, again, the easiest, yet most effective measure you could possibly enforce.
Now speaking of these powerful modules, here's a short selection of the “must-have” ones:
Password Policy: enables you to enforce certain rules when it comes to setting up new passwords (you even get to define the frequency of password changes)
Coder: runs in-depth checks, setting your code against Drupal's best practices and coding standards
Automated Logout: as an admin, you get to define the time limit for a user's session; he/she will get automatically logged out when the time expires
SpamSpan Filter: enables you to obfuscate email addresses, thus preventing spambots from “stealing” them
Login Security: deny access by IP address and limit the number of login attempts
Content Access: grant permission to certain content types by user roles and authors
Hacked!: provides an easy way for you to check whether any new changes have been applied to Drupal core/themes
Security Review Module: it will check your website for those easy-to-make mistakes that could easily turn into security vulnerabilities; here's a preview of this module “at work”
8. Implement HTTP Security Headers
Another one of those too-easy-to-implement, yet highly effective Drupal security best practices to add to your Drupal security checklist:
Implementing (and updating) HTTP security headers
“Why bother?”
Cause:
first of all, their implementation requires nothing more than a configuration change at the web server level
their key role is letting the browsers know just how to handle your site's content
… thus reducing the risk of security vulnerabilities and brute force attacks
9. Properly Secure File Permissions
Ensure that your file permissions for:
opening
reading
modifying them
… aren't too dangerously loose.
Since such negligence could easily turn into an invitation for “evil-minded” intruders!
And it's on Drupal.org's dedicated page that you can find more valuable info on this apparently insignificant, yet extremely effective security measure.
10. Restrict Access To Critical Files
Told you this was going to be a list of exclusively easy-to-implement Drupal security best practices.
Blocking access to sensitive files on your website (the upgrade.php file, the install.php file, the authorize.php file etc.) won't take you more than a few minutes.
But the danger you'd avoid — a malicious intruder gaining access to core files on your Drupal site — is way too significant to overlook.
END of the list! These are probably the easiest steps to take for securing your Drupal site.
What does your own list of Drupal security tips, techniques, and resources to tap into look like?
RADU SIMILEANU / Apr 06'2018
And I'm back, as promised, with 5 more key differences meant to help you solve your Apache Solr vs Elasticsearch dilemma.
To help you properly evaluate the 2 open source search engines and, therefore, to identify the perfect fit for your own use case and your project's particular needs.
6. Node Discovery
Another aspect that clearly differentiates the 2 search engines is the way(s) they handle node discovery. That is, whenever a new node joins the cluster or when there's something wrong with one of them, immediate measures, following certain criteria, need to be taken.
The 2 technologies handle this node-discovery challenge differently:
Apache Solr uses Apache ZooKeeper — already a "veteran", with plenty of projects in its "portfolio" — requiring external ZooKeeper instances (a minimum of 3 for a fault-tolerant SolrCloud cluster).
Elasticsearch relies on Zen for this, requiring 3 dedicated master nodes to properly carry out its discovery “mission”
7. Apache Solr vs Elasticsearch: Machine Learning
Machine learning has a way too powerful influence on the technological landscape these days not to take it into consideration in our Apache Solr vs Elasticsearch comparison here.
So, how do these 2 open source search engines support and leverage machine learning algorithms?
Apache Solr, for instance, comes with a built-in dedicated contrib module, on top of streaming aggregations framework; this makes it easy for you to use machine-learning ranking models right on top of Solr
Elasticsearch comes with its own X-Pack commercial plugin, along with the plugin for Kibana (supporting machine learning algorithms), geared at detecting anomalies and outliers in time series data
8. Full-Text Search Features
In any Apache Solr vs Elasticsearch comparison, the first one's richness in full-text search related features is just... striking!
Its codebase is simply "overcrowded" with text-focused features, such as:
the functionality to correct user spelling mistakes
a heavy load of request parsers
configurable, extensive highlight support
Even so, Elasticsearch "strikes back" with its own dedicated suggesters API. And what this feature does, precisely, is hide implementation details from the user, so that we can add our suggestions far more easily.
And we can't leave out its highlighting functionality (both search engines rely on Lucene for this), even if it's less configurable than in Apache Solr.
9. Indexing & Searching: Text Searching vs Filtering & Grouping
As already mentioned in this post, any Apache Solr vs Elasticsearch debate is a:
Text-search oriented approach vs Filtering and grouping analytical queries type of contrast.
Therefore, the 2 technologies are built, from the ground up, so that they approach different, specific use cases:
Solr is geared at text search
Elasticsearch is always a far better fit for those apps where analytical type of queries, complex search-time aggregations need to be handled
Moreover, each one comes with its own “toolbox” of tokenizers and analyzers for tackling text, for breaking it down into several terms/tokens to be indexed.
Speaking of which (indexing), I should also point out that the two search engine “giants” handle it differently:
Apache Solr has the single-shard join index “rule”; one that gets replicated across all nodes (to search inter-document relationships)
Elasticsearch seems to be playing its "efficiency card" better, since it enables you to retrieve such documents using top_children and has_child queries
10. Shard Placement: Static by Nature vs Dynamic By Definition
Shard placement: the last test that our two contestants here need to pass, so you can have your final answer to your "Apache Solr vs Elasticsearch" dilemma.
In this respect, Apache Solr is static, at least far more static than Elasticsearch. It calls for manual work for migrating shards whenever a Solr node joins or leaves the cluster.
Nothing impossible, simply less convenient and slightly more cumbersome for you:
you'll need to create a replica
wait till it synchronizes the data
remove the “outdated” node
Luckily for you, Elasticsearch is not just “more”, but “highly” dynamic and, therefore, far more independent.
It's capable of moving shards and indices around, while you're granted total control over shard placement:
by using awareness tags, you get to control where those shards should/shouldn't be placed
by using an API call you can guide Elasticsearch into moving shards around on demand
The END! Now if you come to think about it, my 10-point comparative overview here could be summed up to 2 key ideas worth remembering:
go for Apache Solr if it's a standard text-search-focused app that you're planning to build, if you already have hands-on experience working with it, or if you're particularly drawn to the open-source philosophy
go for Elasticsearch if it's a modern, real-time search application that you have in mind; one perfectly “equipped” to handle analytical queries. If your scenario calls for a distributed/cloud environment (since Elastic is built with out-of-the-ordinary scalability in mind)
RADU SIMILEANU / Mar 16'2018
Apache Solr vs Elasticsearch, the 2 leading open-source search engines... What are the main differences between these technologies?
Which one's faster? And which one's more scalable? How about ease-of-use?
Which one should you choose? Which search engine's the perfect fit for your own:
use case
specific needs
particular expectations?
Obviously, there's no universally applicable answer. Yet, there are certain parameters to use when evaluating these 2 technologies.
And this is precisely what we've come up with: a list of 10 key criteria to evaluate the two search engines by, revealing both their main strengths and their most discouraging weaknesses.
So you can compare, weigh the pros and cons and... draw your own conclusions.
But First, A Few Words About The Two “Contestants”
I find it only natural to start any Apache Solr vs Elasticsearch comparison by briefly shedding some light on their common origins:
Both open source search engine "giants" are built on the Apache Lucene platform. And this is precisely why they share a significant number of similar functionalities.
Apache Solr
Already a mature and versatile technology, with a broad user community (including some heavyweight names: Netflix, Amazon CloudSearch, Instagram), Apache Solr is an open source search platform built on Lucene, a Java library.
And no wonder these internet giants have chosen Solr. Its multi-site indexing and searching capabilities are complemented by a full set of other powerful features, too:
dynamic clustering
faceted search
NoSQL features & rich document handling
full-text search
real-time indexing
Elasticsearch
It's a (younger) distributed open source (RESTful) search engine built on top of Apache Lucene library.
Practically, it emerged as a solution to Solr's limitations in meeting those scalability requirements specific to modern cloud environments. Moreover, it's a:
multitenant-capable
distributed
full-text
... search engine, one that "spoils" its users with schema-free JSON documents and HTTP web interfaces.
And here's how Elasticsearch works:
It includes multiple indices that can be easily divided into shards which, furthermore, can (each) have their own “clusters” of replicas.
Each Elasticsearch node can have multiple shards (or just a single one), and the search engine is the one "in charge" of routing operations to the right shards.
Now, if I am to highlight some of its power features:
analytical search
multi-tenancy
grouping & aggregation
distributed search
1. User and Developer Communities: Truly Open-Source vs Technically Open-Source
A contrast that we could define as:
“Community over code” philosophy vs Open codebase that anyone can contribute to, but that only “certified” committers can actually apply changes to.
And by “certified” I do mean Elasticsearch employees only.
So, you get the picture:
If it's a fully open-source technology that you're looking for, Apache Solr is the one. Its robust community of contributors and committers, coming from different well-known companies, and its large user base are the best proof.
It provides a healthy project pipeline, everyone can contribute, and there's no single company claiming a monopoly over its codebase.
One that would decide which changes make it to the code base and which don't.
Elasticsearch, on the other hand, is a single commercial entity-backed technology. Its code is right there, open and available to everyone on Github, and anyone can submit pull requests.
And yet: it's only Elasticsearch employees who can actually commit new code to Elastic.
2. What Specific Use Cases Do They Address?
As you can just guess it yourself:
There's a better or worse fit, in any Apache Solr vs Elasticsearch debate, depending exclusively on your use case.
So, let's see first what use cases are more appropriate for Apache Solr:
applications relying greatly on text-search functionality
complex scenarios with entire ecosystems of apps (microservices) using multiple search indexes, processing a heavy load of search-request operations
And now some (modern) use cases that call for Elasticsearch:
applications relying (besides the standard text-search functionality) on complex search-time aggregations, too (see the quick sketch after this list)
open-source log management use cases with many organizations indexing their logs in Elasticsearch in order to make them more searchable
use cases depending on high(er) query rates
data stores “supercharged” with capabilities for handling analytical type of queries (besides text searching)
… and pretty much any new project that you need to jump right onto, since Elasticsearch is much easier to get started with. You get to set up a cluster in no time.
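For a feel of what such an analytical query looks like in practice, here's a minimal sketch using the elasticsearch Node.js client of the time (the index name, field names and local host are all just assumptions):
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({ host: 'localhost:9200' });

client.search({
  index: 'logs',
  body: {
    query: { match: { message: 'timeout' } },
    aggs: {
      errors_per_day: {
        date_histogram: { field: '@timestamp', interval: 'day' }
      }
    }
  }
}).then(response => {
  // one bucket per day, with the matching document count
  response.aggregations.errors_per_day.buckets.forEach(bucket => {
    console.log(bucket.key_as_string, bucket.doc_count);
  });
});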
3. Apache Solr vs Elasticsearch: Which One's Best in Terms of Performance?
And a performance benchmark must be on top of your list when doing an Apache Solr vs Elasticsearch comparison, right?
Well, the truth is that, performance-wise, the two search engines are comparable. And this is mostly because they're both built on Lucene.
In short: there are specific use cases where one “scores” a better performance than the other.
Now, if you're interested in search speed, in terms of performance, you should know that:
Solr scores best when handling static data (thanks to its capability to use an uninverted reader for sorting and faceting, and thanks to its caches, as well)
Elasticsearch, being “dynamic by nature”, performs better when used in... dynamic environments, such as log analysis use cases
4. Installation and Configuration
Elasticsearch is a clear winner at this test:
It's considerably easier to install, suitable even for a newbie, and lighter, too.
And yet (for there is a “yet”), this ease of deployment and use can easily turn against it/you. Particularly when the Elasticsearch cluster is not managed well.
For instance, if you need to add comments to every single configuration inside the file, then the JSON-based configuration, otherwise a surprisingly simple one, can turn into a problem.
In short, what you should keep in mind here is that:
Elasticsearch makes the best option if you're already using JSON
if not, then Apache Solr would make a better choice, thanks to its well-documented solrconfig.xml and schema.xml
5. Which One Scales Better?
And Elasticsearch wins this Apache Solr vs Elasticsearch test, too.
As already mentioned here, it has been developed precisely as an answer to some of Apache Solr's well-known scalability shortcomings.
It's true, though, that Apache Solr comes with SolrCloud, yet its younger “rival”:
comes with better built-in scalability
it's designed, from the ground up, with cloud environments in mind
And so, Elasticsearch can be scaled to accommodate very large clusters considerably more easily than Apache Solr. This is what makes it a far better fit for cloud and distributed environments.
And this is the END of PART 1. Stay tuned for I have 5 more key aspects “in store” for you, 5 more “criteria” to consider when running an Apache Solr vs Elasticsearch comparison!
Still a bit curious: judging by these first 5 key features only, which search engine do you think suits your project best?
RADU SIMILEANU / Mar 16'2018