NodeXperts Blog by Vinay Chaudhary - 11M ago

Managing images and media in-house has always been a herculean task for any project. I had to bite the bullet when I started working on a product called Market Cube.

Marketcube.io is a SaaS-based marketplace engine that instantly transforms any Shopify store into a fully functional marketplace, for products or services.

One of the crucial features of Marketcube is displaying product images, and we had a huge number of them. Because we served images directly from the database, on the same server as the application, rendering was slow and applying transformations to images was difficult.

To eliminate these problems, we did some research and found a lifesaver in Cloudinary, which gave us a rich set of features for image manipulation and management.

Cloudinary is a cloud-based solution for managing images. It was very easy to integrate into our project; we simply used the cloudinary npm package.

Cloudinary has descriptive documentation, which further eased the integration into our existing project. After we migrated all our images to Cloudinary, the magic started: images now render very fast, as they are delivered through a CDN.

Image transformation in Cloudinary is as simple as adding transformation parameters to the image URL provided by Cloudinary. For example, we have used the following techniques in Marketcube:

  • We can render an image of the required width and height using w_x and h_y, where x is the width and y is the height of the image in pixels.
Example: https://res.cloudinary.com/marketcube/image/upload/w_400,h_400/v1517483884/ayhwxxjrs0zfp4d2i97f.jpg
  • One of the most awesome features is automatic face detection. We can use g_face to focus on the face present in the image; this is best used for profile pictures.
Example: https://res.cloudinary.com/marketcube/image/upload/w_400,h_400,c_crop,g_face,r_max/v1517483884/ayhwxxjrs0zfp4d2i97f.jpg
  • We can also change the quality of an image according to our needs by setting the q parameter in the image URL (for example, q_60 for 60% quality).
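The transformations above can be sketched with a small helper that assembles the delivery URL. This is illustrative only (the cloud name and public id are placeholders); in a real project the official cloudinary npm package builds these URLs for you:

```javascript
// Illustrative sketch: compose a Cloudinary delivery URL by joining
// transformation parameters (w_, h_, c_, g_, q_, r_) with commas.
// A standalone helper just to show how the URL is structured.
function buildCloudinaryUrl(cloudName, publicId, transforms = {}) {
  const parts = [];
  if (transforms.width) parts.push(`w_${transforms.width}`);
  if (transforms.height) parts.push(`h_${transforms.height}`);
  if (transforms.crop) parts.push(`c_${transforms.crop}`);
  if (transforms.gravity) parts.push(`g_${transforms.gravity}`);
  if (transforms.quality) parts.push(`q_${transforms.quality}`);
  if (transforms.radius) parts.push(`r_${transforms.radius}`);
  const transformation = parts.length ? parts.join(',') + '/' : '';
  return `https://res.cloudinary.com/${cloudName}/image/upload/${transformation}${publicId}`;
}

// A 400x400 face-focused circular crop, as in the profile-picture example:
const url = buildCloudinaryUrl('marketcube', 'sample.jpg', {
  width: 400, height: 400, crop: 'crop', gravity: 'face', radius: 'max',
});
console.log(url);
// https://res.cloudinary.com/marketcube/image/upload/w_400,h_400,c_crop,g_face,r_max/sample.jpg
```

Because the transformation lives in the URL, a differently sized or cropped variant is just a different URL, with no re-upload needed.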

Cloudinary provides both http and https URLs, so it works for sites with or without SSL certificates.

We can add tags to our images in Cloudinary while saving or updating them. This made image management very easy; the following tags were added to our images in Marketcube:

  1. The unique id of a user.
  2. The unique id of the product.

It is now very easy to track images of any particular user or product. It helped us in the following ways:

  1. If a user is deleted from Market Cube, we simply remove the images tagged with that user’s unique id.
  2. If a product is deleted from Market Cube, we simply remove the images tagged with that product’s unique id.
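The tag-based cleanup above can be sketched as follows. With the cloudinary npm package this maps to a single Admin API call (cloudinary.v2.api.delete_resources_by_tag); the in-memory version below only illustrates the selection logic, and the image records are hypothetical:

```javascript
// Illustrative, in-memory version of tag-based cleanup. With the
// `cloudinary` npm package the same idea is one Admin API call:
//   cloudinary.v2.api.delete_resources_by_tag(tag)
// Here, each image record carries the tags added at upload time.
function deleteByTag(images, tag) {
  const removed = images.filter((img) => img.tags.includes(tag));
  const kept = images.filter((img) => !img.tags.includes(tag));
  return { removed, kept };
}

// Hypothetical records: tags hold the user id and product id.
const images = [
  { publicId: 'img1', tags: ['user_42', 'product_7'] },
  { publicId: 'img2', tags: ['user_42', 'product_9'] },
  { publicId: 'img3', tags: ['user_99', 'product_7'] },
];

// Deleting user 42 removes exactly that user's images:
const { removed, kept } = deleteByTag(images, 'user_42');
console.log(removed.map((i) => i.publicId)); // [ 'img1', 'img2' ]
console.log(kept.map((i) => i.publicId));    // [ 'img3' ]
```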

This housekeeping continuously removes unnecessary images from Cloudinary and hence reduces cost, as our storage is utilized properly.

The post Image Management via Cloudinary appeared first on NodeXperts Blog.

NodeXperts Blog by Achin Gupta - 11M ago

When I was told to write about the software technology Docker, I had two options. I could give you a concise explanation comprising a technical definition, its functions, some statistics and its benefits. Or I could adopt an expository style of writing to demonstrate what it does, why there was a need for Docker and how it has helped solve a lot of problems. Like Robert Frost, when two roads diverge in a wood, I take the one less travelled by and hence, here begins my hermeneutic discourse.

Take a back seat and visualize!

Much to your dismay, your lease has expired and your landlord has asked you to vacate your room immediately since the place has been assigned to a new guest. Unprepared for the move, you try to negotiate and delay the process by a day or two. However, the landlord has his reservations as he is unaware of the exact time the new guest would check in. He tells you to leave immediately and take away as many belongings with you as possible; the rest can be stored in a cabinet and dealt with later.

In a jiffy, you pull out your largest suitcase, rummage through your drawers, cram as many belongings as possible and book a cab to your friend’s place. However, when the cab arrives, you notice that its boot is not large enough for your suitcase. Like a bear with a sore head, you try to balance the suitcase on the backseat but worry if something will break or leak due to instability. The cab driver, on the other hand, wishes he had a car with a bigger trunk so that situations like these would not arise.

Tough choices!!

Now, imagine if the dimensions and all other properties of such suitcases were fixed. All you’d have to worry about is what to put in the suitcase and what to leave out. Naturally, the cab too would have a boot of sufficient size since it has to be capacious enough for all passengers. You would not have to be anxious about how to stow the bag in the car. Is it kept in the right orientation? Is the trunk big enough? Will the suitcase be alright? These would not be your concerns since these will be taken care of by the cab driver already. Similarly, the driver would not have to bother about what’s in the bag or if the trunk is big enough because that is a prerequisite in itself.

So, why is this story relevant?

This forms the foundation of Docker. When developers design some software, often they need to ship it to another system and try to ensure that it behaves in a similar manner – much like your belongings that need to be transferred from one place to another without being hampered. This is done with the help of containerization, a process which is analogous to the standardization of a suitcase. Containers allow the developer to package the application with all the components needed to run it such as the code, runtime, system libraries and other dependencies. They provide a kind of protection or in other words, ensure that the application always runs in a particular fashion, irrespective of the environment.
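As a concrete sketch, here is what such a package description can look like for a Node.js application. This Dockerfile is hypothetical (the file names, port and base image are placeholders), but it shows how the code, runtime and dependencies travel together in one image:

```dockerfile
# Hypothetical Dockerfile for a Node.js app. The image bundles the
# runtime, system libraries, dependencies and application code, so the
# container behaves the same on any machine that can run Docker.
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application code itself.
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this file produces the "standardized suitcase": one image that any Docker host can run, regardless of what else is installed on that host.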

Earth speak?

So, Docker provides developers with the luxury of being able to write code without fretting about what machine it will finally run on or how it will be deployed since it will eventually be containerized (similar to you not having to worry about whether your belongings will be safe or not since they will eventually go in a suitcase that can sit comfortably in the trunk of any cab). On the same lines, the operations team can perform any function with that container without bothering about what it is built of just like your cab driver, who can handle any suitcase in the world since he has the right infrastructure to deal with it.

With applications becoming more and more complex each day, such a tool was called for, which explains why the release of Docker generated a huge buzz. Developed by the company Docker, Inc., this open-source project works for both Linux- and Windows-based apps.

Docker VS Virtual Machines

People often draw parallels between Docker and a virtual machine (VM). A VM emulates an entire computer system, including a full guest operating system, while Docker containers share the host operating system’s kernel and package only the application and its dependencies. Unlike VMs, which are bulky and slow to boot, Docker is an extremely efficient, lightweight and portable technology designed not to get in the way.

Thus, it has an upper hand and this can be exemplified as follows. To ship some belongings from one city to another, it is always easier to use a suitcase rather than transporting your entire room, primarily for two reasons. One, your room will have other items that might not be of high utility to you at that moment. Two, it is not exactly easy to haul an entire room, is it? Similarly, developers want the entire application but do not want to package all processors, network interface and hard drive along with it as it would make shipping much more inconvenient. (Of course, this would not be a problem for Lord Hanuman because he was fine with lifting the entire Dronagiri mountain from the Himalayas to Lanka for one magical herb, Sanjeevani.)

Let’s talk numbers..

451 Research predicts that the container technology market will grow from $495 million (2015 figures) to a whopping $2.7 billion by 2020. Docker itself was downloaded more than 13 billion times between its initial launch in 2013 and 2017. So, look out fellas! Because we might just be in the midst of a container revolution set forth by this portable container engine. It seems pretty harmless with its cute whale logo but has the potential to take the cloud computing world by storm!

The post Dock-err? to Docker! appeared first on NodeXperts Blog.


NodeXperts was part of the National Annual Technology Conference 2017. The NATC is organized in New Delhi/NCR each year by NASSCOM, the apex association for the Indian IT Industry.

The conference aims to bring together Indian IT professionals from all over the country and drive the conversation forward on the latest technology trends. Like last year, the core theme for the conference this year was Disruption. That’s right, Disruption with a capital D! In an industry characterized by quickly evolving paradigms – Disruption is not only inevitable, but often the driving force behind tech revolution.


This time around, the conference talked about a very specific kind of Disruption – the one brought in by the advent of Artificial Intelligence (AI) into mainstream tech. AI was the buzzword of the day and most of the talks were centered around it.

We joined the conference looking to learn more about Artificial Intelligence, how the giants in the Indian tech industry are using it, as well as aiming to network with people.


The Conference venue was The Leela Ambience Hotel in Gurugram. We arrived there at 9 AM to find the lobby full of people conversing over tea, coffee and cookies. Pretty soon, it was time to start the day’s agenda and we were ushered into the hall where talks were to take place.

Over the past year, AI has been both hailed as a boon and criticized as a bane. While AI brings new insights from the vast troves of information accumulated during the Big Data boom, there are scientific and ethical concerns about the manner and direction of growth of artificial intelligence. Moreover, with jobs being lost to automation and Machine Learning, some engineers have been worried about becoming redundant. This sentiment was prevalent throughout the conference; few talks went by that did not allude in some way to this feeling among the people who power the tech industry. After the keynote speech, the fireside chat that kicked off the conversations focused especially on this.

The day would begin with a single track, which would then split into two parallel tracks after a mid-morning tea-break. Each parallel session pitted Artificial Intelligence against other streams, like Analytics, Software Architecture, DevOps and Security. Next came lunch, then some more parallel sessions. Another tea-break, then finally merging back together into a single track.

Let’s now take a look at some of the highlights and a few interesting talks of the day.


Manik Varma from Microsoft initiated the technical sessions of the day with his talk “The Extremes of Machine Learning”. He talked about his work on Extreme Classification, and showed an example on how his research would improve search mechanisms on e-commerce websites by providing context understanding of search terms.

Following that, he showcased his work on Microsoft’s EdgeML library and the ProtoNN and Bonsai algorithms. These technologies can be used to run ML engines on tiny microcontrollers with just 2 KB of RAM and 32 KB of flash ROM! The models can be trained in the cloud and then make predictions locally using the EdgeML library, consuming very few resources. These engines can be hosted on microcontrollers as small as a dimple on a golf ball, which is extremely beneficial in cases where size, processing power, memory and energy consumption are critically important, like medical pacemaker implants.

He also demonstrated the library in action by hooking up his walking cane with his phone, using physical gestures to have Cortana automatically read notifications off of his phone screen!


Next, James Geraci from Samsung took the stage with “Experience the future of work with AI and Automation”. He talked about how to garner intelligent information and value from data automatically using Data Intelligence (DI). He also defined some of the problems being faced in DI, like inconsistent data sets of the same type of information, how to sanitize it and use it.

Some example use cases of DI involved making our appliances and machines more intelligent. In places with dynamic pricing of electricity, an intelligent washing machine could switch itself on when prices are low, saving electricity costs. DI could also be used to build autonomous cars that are aware of any obstruction or accident further down the road in real time, and would switch you to another route well ahead, so that you don’t get stuck.

Another important consideration in DI is how much information can be stored in a unit of memory. On this, he advocated using qubits instead of bits to store large-scale data, as they can be exponentially more efficient in terms of the space, energy and time required for storage.

He also imparted some mindful advice: do not always rely on automation, and use your own critical thinking from time to time. He narrated an incident about his colleagues in Seoul, who had been using ML and DI for seven months to garner insights from data gathered about loss of refrigerator coolant. However, they had picked the wrong base factors (what actually causes the loss of coolant), so their DI results did not make sense. They only realized their mistake when they stopped the automated process and sat down to rethink, and found their base factors had been wrong all along!


In the post-lunch session, “The Future of Artificial Intelligence”, Ramana Jampala from Avlino Inc gave very keen insights on the science of Artificial Intelligence and what this term actually means.

He explained that the industry’s adoption of AI practices rests on two basic motives: reducing costs and increasing the revenue base. In the furore of this trend, many things are being misconceived and passed off as Artificial Intelligence when they are not actually AI.

He stressed that AI (as it should be) is the science that focuses on the transcendence of computational capacity over manually generated hardware or hardwired logic. This means that AI is not just a matter of spinning up an API and having a server send responses computed from pre-programmed rules. To be true AI, the logic itself should grow, learn and create new rules for itself.

In order for a business to successfully navigate the waters of the AI revolution, he advised businesses to target what he terms the AI Trifecta:

– Software Engineering
– Qualitative Analysis
– Domain Knowledge

Taken together, these three factors help in successful AI product development. This is why, he elaborated, the businesses most likely to create a successful AI product are:

– Larger corporations: As they have broad partnerships.
– Smaller but focused companies: As they have founders with deep domain knowledge, and are targeting a specific industry


NASSCOM had also conducted a Hackathon the previous day, focused on creating AI-based solutions to problems currently being faced in India. The top winners were a group from Fidelity International that had created an AI-based solution to calculate GST for traders. NASSCOM also awarded a special prize to a group of engineering students from IIT Gwalior who had created a fake news detection system, despite having no professional experience!


Technology is hardly just about software. In line with this thought, the NATC organizers gave a special welcome to Team Indus from Bengaluru. Theirs is the domain of private-sector space exploration. Recently, several private companies have begun their advent into this realm, SpaceX and Blue Origin being two of the most well-known. Come 2018, Team Indus will also count itself among them!

They are a private Indian aerospace startup on a mission to land a space rover on the Moon, and one of the finalists in Google’s Lunar XPRIZE competition. Sheelika Ravishankar from Team Indus gave a very spirited and exciting presentation on their mission and the origins of ECA – the robot that will mark India’s arrival on the surface of the moon.


The conference showcased various talented professionals and researchers from premier organizations, so it was rather interesting to see the ideation process behind the problems they are working on, and the solutions they are trying to effect.

In terms of composition, most of the talks were at a rather high technical level. It would have been nice to hear more talks that went deeper and explained the intricacies of AI at the code level. It would also have been good to see some diversity in the panelist and speaker lineup.

NATC came across as a good place to network with people in the Indian IT industry. We took the opportunity and connected with various people as well. However, for a programmer who is looking to get their hands dirty in code and learn new concepts in technology, other developer-focused conferences and meetups would be better suited.

One of the main takeaways from the conference was that when it comes to AI, we should not simply get caught up in the flow of what is happening, but rather take some time to think and understand what we want to do. That way we can create something meaningful with it, since AI seems to have set the course of technology for the times to come.

The post Managing AI Disruption : NASSCOM Annual Tech Conference 2017 appeared first on NodeXperts Blog.

NodeXperts Blog by Pranav Kumar Tiwari - 11M ago

The purpose of this blog is to highlight the points we must follow while developing any product within an organization.

Database security is a major concern for any organization. If anything has value, it is data. Database security is more than just important: it is an absolutely essential part of any organization. Every piece of data that has value must be kept confidential and secure. Let’s cover some basic but essential practices for securing data, databases, and stored functions and procedures.

Let’s first discuss what Database Security is.

What is Database Security

It refers to securing the store where we keep our data, protecting it from illegitimate use and from malicious threats and attacks.

It is similar to securing our house or assets by putting a lock on them, so that only the person with the key can open them. This ensures that only authorized personnel have access to their respective data.

Before discussing more on database security, let’s discuss some security threats to the database.

Threats to database security

Definition:  A threat, in the context of computer security, refers to anything that has the potential to cause serious harm to a computer system.

In computer security, a threat is a possible danger that might exploit a vulnerability to breach security and therefore cause possible harm.


Following are some threats that harm our database security:

Excessive privileges

Database privileges can be misused in many ways. A user with privileges can remove data or update a field with wrong data, and may abuse privileges for unauthorized purposes.

Misuse of privileges can be done in any of the following ways:

  • Excessive privilege abuse: having more permissions than required. For example, suppose you have the right to view only your own data, but somehow you can also view your colleagues’ data, and you use that access maliciously.
  • Legitimate privilege abuse: an employee who can legitimately view and alter the data of junior employees, but alters that data when there is no valid reason to do so.

This type of threat is crucial because an authorized person is the one misusing data. It harms an organization greatly: once employees hold a privilege, they can abuse a colleague’s or junior’s data over any personal issue.

SQL Injection

Every time we want to access data from the database, we execute statements or queries, which are often generated dynamically from web page input.

When these queries are modified or replaced by malicious statements, the attack is called SQL Injection.

These malicious statements are used to destroy the database. SQL injection is one of the most common web hacking techniques.

SQL Injection is of two types:

  • SQL Injection: targets traditional database systems. Attacks usually involve injecting unauthorized statements into the input fields of applications.

Example: suppose you want to view the user whose user id is 105. The query below is executed each time a request is made from the web.

Original query

SELECT * FROM Users WHERE UserId = 105;

The above query can be modified with a malicious condition that always evaluates to true:

Malicious statement

SELECT * FROM Users WHERE UserId = 105 OR 1=1;

Because OR 1=1 is always true, this query returns every record in the Users table, not just the record of user 105.

  • NoSQL Injection: targets NoSQL and big data platforms. This type involves inserting malicious statements into components like MongoDB, Hive or MapReduce. For example, a malicious login request body:


  "username": "admin",

  "password": {$gt: ""}


Here ‘$gt’ is an operator of MongoDB that means fetch record which is greater than null string. It will give access to the admin user account.
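Both attacks exploit the same mistake: user input reaching the query engine unvalidated. The sketch below (all names are illustrative; real applications should prefer the parameterised queries offered by their database driver) shows the mechanism and two simple checks:

```javascript
// Illustrative only: why string-built queries are injectable, and two
// simple defences. Function and variable names here are hypothetical.
function buildQueryNaively(userId) {
  // Dangerous: user input is pasted straight into the SQL text.
  return `SELECT * FROM Users WHERE UserId = ${userId};`;
}

const attackerInput = '105 OR 1=1';
console.log(buildQueryNaively(attackerInput));
// SELECT * FROM Users WHERE UserId = 105 OR 1=1;

// Defence 1 (SQL): validate input before it reaches the query text.
function safeUserId(input) {
  const n = Number.parseInt(String(input), 10);
  if (!Number.isInteger(n) || String(n) !== String(input).trim()) {
    throw new Error('invalid user id');
  }
  return n;
}

// Defence 2 (NoSQL): reject operator objects like { $gt: "" } where a
// plain string is expected.
function safeString(input) {
  if (typeof input !== 'string') throw new Error('expected a string');
  return input;
}
```

With these checks, '105 OR 1=1' and { $gt: "" } are rejected before any query is built, while legitimate values pass through unchanged.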

Loss of Data Integrity

Includes data corruption and invalid data; compromised data harms the organization in many ways. Integrity must be maintained at every stage of the data life cycle:

  • Entering, creating and/or acquiring data
  • Processing and/or deriving data
  • Storing, replicating and distributing data
  • Archiving and recalling data
  • Backing up and restoring data
  • Deleting, removing and destroying data

Loss of Data availability

This can happen if network security has been compromised and data is stolen or blocked before reaching the actual user.

These are the threats that can compromise database security. Security must be a primary concern for any organization, and it must be practised at every level to build a strong security zone around database storage.

Now let’s discuss some points which must be followed by developers before working with the database.

The following database security methods should be followed by every developer.


Isolation of Database from Server

Always keep the database server separate from the web server. Often the database is deployed on the same server that the project runs on; this is bad practice, and we must put the database on a different server.

Putting the database on the same server makes it easy for an attacker: they only have to crack the administrator account on one server to gain access to everything.

We should avoid this situation by putting the database on a different server, protected by a firewall in front of it.

Keep your Files and backups Encrypted

We keep files and other confidential information in our database, generally in plain text, which is easily readable by anyone. It is not always an external attacker who steals information; sometimes a trusted person who interacts with the database steals or destroys it. We should avoid these situations by encrypting files and backups too. We must encrypt any important or confidential data that has value.

Use a Web Application Firewall

We must use a firewall for our web application. It not only secures the website against cross-site scripting vulnerabilities and vandalism; a good firewall can also block many SQL injection attempts.

DBA plays a key role in DB Security

We must have a DBA for our database. DBAs play an important role in maintaining database security and other required operations, and they keep track of everything that happens in the database.

Minimize use of third-party apps

Third-party libraries can make an attacker’s task easier. If such an app pulls data from our database, it can harm our data, so we should avoid these libraries where possible. If you do use a third-party library in your project, make sure it is verified by valid sources.

  • Mongoose is an example of third party library which is verified and trustworthy.

Create Temporary tables or Views

To increase database security, we should create views or temporary tables for the data frequently accessed by web applications. These keep the web application away from the original database tables, so modifications do not directly affect the original data. This can also speed up queries, since they process fewer records.


To summarize, access protection begins with who can access data and what type of data attackers want to access. There is a lot of scope to improve the techniques used for database security.

According to one survey, the following was reported:

  • 84% of companies feel that their database security is adequate.
  • 73% of companies predict that database attacks are increasing day by day.
  • 48% of attackers are authorized users.
  • 48% of users have misused their privileges.

So it is important that we are aware of the basic guidelines for database security. It is not only professional attackers who harm our data; authorized users can harm the database as well.



The post Database Security appeared first on NodeXperts Blog.


MongoDB is a very famous NoSQL database, and various blogs showcase its goodness. In this blog, I want to highlight some features of MongoDB that may be new to some readers.

Geolocation data with MongoDB: 

We can save geolocation(Geospatial) data using GeoJson Objects. For example:

geoLocation: {
    type: "Point",
    coordinates: [-73.856077, 40.848447]
}

The above is an example of a GeoJSON object, namely a GeoJSON Point. Here we require the following:

  • A type field that specifies the GeoJSON type: Point, LineString, Polygon, GeometryCollection, etc. For more, refer here.
  • A coordinates field that contains the longitude followed by the latitude (note the order).
  • geoLocation is the field name.

The actual query is:

<field>: {
    type: <GeoJSON type> ,
    coordinates: <coordinates>
}

For more details, refer here.
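A small illustrative helper makes the coordinate order explicit (the function name is hypothetical; the validation ranges follow the GeoJSON specification):

```javascript
// Illustrative helper: build a GeoJSON Point for MongoDB. Note the
// order MongoDB expects: [longitude, latitude], not [latitude, longitude].
function geoPoint(longitude, latitude) {
  if (longitude < -180 || longitude > 180) throw new Error('bad longitude');
  if (latitude < -90 || latitude > 90) throw new Error('bad latitude');
  return { type: 'Point', coordinates: [longitude, latitude] };
}

// The document from the example above:
const geoLocation = geoPoint(-73.856077, 40.848447);
console.log(geoLocation);
// { type: 'Point', coordinates: [ -73.856077, 40.848447 ] }
```

Once such fields are indexed with a 2dsphere index, MongoDB can answer proximity queries over them with operators like $near and $geoWithin.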

Views with MongoDB:

  • No physical schema and no use of disk space.
  • Always read-only; write operations on a view will throw an error.
  • Always execute a pre-defined query, which makes it easier to fetch data.
  • Add a layer of abstraction and help with data security.

A view is created with:
db.createView(<view>, <source>, <pipeline>, <options>)

View – the name of the view
Source – the name of the source collection
Pipeline – an array of aggregation pipeline stages; the view is created by applying the pipeline to the source collection
Options – additional options

For example, the following pipeline merges two address fields into one:

  [{ $project :
    { _id : 0, address : {$concat : ["$address1", " ", "$address2"]}}
  }]

In this example, we focus on one field, address, which is a combination of the values stored in address1 and address2. So whenever we run a find query on the view, the result set will show a single address field instead of the two separate fields, address1 and address2.

So the good thing about a view is that it allows us to customize the resultant fields, and we can perform sort and other operations on it to get the desired results.

More can be read from here.

I hope I was able to give a basic idea about handling geolocation data as well as views.

The post MongoDB: Geospatial Data and Views appeared first on NodeXperts Blog.

NodeXperts Blog by Mukul Jain - 11M ago

JSChannel, India’s own JavaScript conference.

If you’re a developer or a techie, you should always be looking out for conferences and local Meetups: to meet like-minded people, to know what’s happening in the industry, and to see who is working on which tech and which ones are going to be huge in the coming years. Well, if you’re saying, “nope, I don’t”, then I highly recommend going out and experiencing these events. They are too much fun!!

Honestly, I only started going to these Meetups a year ago. If I remember correctly, my first Meetup was with a local Delhi/NCR community, JSLovers. After attending a few Meetups, I applied for speaker roles at a couple of events. Yes, you guessed it, JSChannel was one of them. A couple of months after submitting my CFP, I got a response that my talk had been selected. Ohh… and my talk was on service workers and PWAs.

The event took place at the JW Marriott, Bangalore, India, over two days, with almost 16 talks ranging from Vue to React. I was to give my talk at 10:45 AM on the first day, after the first speaker. On the day of the event, I thought I’d check out the place where it was happening (5 floors down, to be honest), and the moment I stepped into the conference hall, I was awe-struck. At 9 AM, the place was packed. I wasn’t expecting that at all. It just shows how many people were excited to be part of this event; and Bangalore, where the event took place, is known as the “Silicon Valley of India”. Well, now I know how this city earned that name.

After the first talk, I was called up, and once my laptop was connected to the main screen (which took 30 seconds longer than I expected, lol), I started talking about service workers, something every JavaScript developer should know about nowadays (PWAs). The experience was surreal: speaking in front of my biggest audience so far, I think. But for me, the real fun started after the talk was over, when attendees got the chance to ask questions, and damn, people were excited about service workers. I wasn’t completely sure whether, in the era of React and Angular, anyone would be interested in service workers, but even after the talk many came up to me just to ask how they could use service workers in their apps. Well, that was about me and my talk.

Let’s talk about some other talks.

All the talks were good, but I’m not going to cover every one of them here, only the ones that were highlights for me. On the first day, two talks (well, technically one, you’ll get it later) literally blew me away. The first was by a guest speaker from LA (I think), Simone. She codes games in Python, knows Java and is thinking of learning JavaScript, and one small detail I missed here: she was just 10 years old. I can sum it up by quoting one attendee’s comment during the Q&A: “you’re raising the bar of this conference”. Check out her talk, you’ll be amazed. The second was by Sarah Drasner (interestingly, she followed Simone), who gave a talk on animation in Vue and canvas. And the animations she showed there, you wouldn’t believe they were just some HTML tags.

Even on the second day, there were some great talks from Franziska, Parashuram and Andrew Clarke. It’s getting harder for me now to pick just a couple of them, so I just won’t. I would highly suggest checking out those talks; you can learn a lot from them, I did.

But the best thing about the conference, as also stated by many speakers and the inspiration for this blog’s title, was the spirit of the conference, created by the attendees. Two consecutive days, almost 9 straight hours, 8 talks each day, and even then the enthusiasm wasn’t down a bit. They asked questions to the last speaker with the same excitement as they asked the first one.

It was my first time at JSChannel so I can’t compare this with the last ones, but they did a great job this year. A special shout-out to the organizers for making this event possible.

I am gonna end by saying that these conferences are a great place to learn new things and meet amazing people. They are a great way to be part of the community.

The post Spirit of JSChannel 2017 appeared first on NodeXperts Blog.

Serverless is the hot topic these days. Everyone is exploring it, in the hope of reducing the effort of server management. And why shouldn’t they? Uber, Instagram and Airbnb are just a few of the many tech giants which have used it to their advantage. Other than server management, serverless also takes care of scaling. What I mean by auto-scaling can be understood from this:

Your application can be scaled automatically or by adjusting its capacity through toggling the units of consumption (e.g. throughput, memory) rather than units of individual servers. – AWS Serverless Docs

Though I wanted to restrict this blog to only serverless, there is no way we can talk about it without mentioning AWS. AWS has a complete platform for serverless, with more services than you can count on your fingers. And all of them are fully managed and scalable. Check here for the AWS serverless platform docs.

However, it’s not just AWS which gives you options to make serverless apps; IBM OpenWhisk, Azure and Google Cloud also offer this architecture, but I can’t comment on those as I’ve never used them so far.

So, let’s do a small FAQ on serverless.

What is Serverless?

Serverless is a way to create apps where you don’t have to monitor, provision and manage your server; your third-party service provider will do it for you. It can be AWS, Azure or something else.

Wait, it means, there is no server, where is my code running then?

No no no! There is a server, there will always be a server (I think so). It’s just that you don’t have to worry about it. Your code is running in the AWS cloud.

Is it going to cost more than usual server flow?

You’d think that if something reduces your effort, it should cost more. Well, not necessarily. I’ve used Lambda functions in many of my apps and it doesn’t cost much. If your code is just “sitting idly” in the cloud, then you won’t be charged a penny. There’s a nice tool, called ServerlessCalc, for checking how much it will cost based on the number of requests, the time they took and the memory used.
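To get an intuition for that pricing model, here is a rough sketch of the calculation tools like ServerlessCalc perform: you pay per request plus per GB-second of compute. The rates in this snippet are illustrative assumptions, not current AWS pricing, so check the official pricing page for real numbers:

```javascript
// Rough Lambda cost estimate. Charges are based on request count plus
// compute time measured in GB-seconds (memory in GB * duration in seconds).
// The rates below are illustrative assumptions, not current AWS pricing.
function estimateLambdaCost({ requests, avgDurationMs, memoryMb }) {
  const REQUEST_PRICE = 0.20 / 1_000_000; // $ per request (assumed)
  const GB_SECOND_PRICE = 0.0000166667;   // $ per GB-second (assumed)
  const gbSeconds = requests * (avgDurationMs / 1000) * (memoryMb / 1024);
  return requests * REQUEST_PRICE + gbSeconds * GB_SECOND_PRICE;
}

// e.g. one million requests, 200 ms average duration, 128 MB of memory:
const cost = estimateLambdaCost({
  requests: 1_000_000,
  avgDurationMs: 200,
  memoryMb: 128,
});
console.log(`$${cost.toFixed(2)} per month, roughly`);
```

Notice how a million short-lived invocations still cost well under a dollar at these rates, which is the “pay for what you use” point in practice.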

These are the 3 questions I could think of at the time of writing this blog, but if you have more questions, feel free to ask in the comments.

So far, we know these advantages of serverless architecture:

  • No server management
  • Auto-scaling
  • Pay only for what you use

Now, let’s say you have decided to make a serverless app. The first question that should come to your mind is: how? How do I make a serverless app?

Fortunately, for you and me, there are many services which provide functionality for creating serverless apps. A simple Google search should give you more options than you need.
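For instance, with the Serverless Framework (one popular option for deploying to AWS Lambda), describing a whole app can be as small as one config file. This is just a minimal sketch; the service and handler names here are hypothetical placeholders:

```yaml
# serverless.yml, a minimal sketch using the Serverless Framework.
# Service, function and handler names are hypothetical placeholders.
service: hello-serverless

provider:
  name: aws
  runtime: nodejs14.x

functions:
  hello:
    handler: handler.hello   # the exported `hello` function in handler.js
    events:
      - http:                # expose it through an API Gateway endpoint
          path: hello
          method: get
```

Running the framework’s deploy command against a file like this provisions the function and its HTTP endpoint for you; no servers to set up.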

But here I will talk about the AWS Serverless platform. AWS serverless is a combination of many of its services, which lets you focus on building a product rather than servers and clouds.

Let’s tick off some of those services

  • Database
    That’s the most basic need a developer can have, and AWS provides a NoSQL, fully managed, cloud database known as DynamoDB.
  • Service for computing logic
    “Less servers, more services”
    How about having just a service which does one specific task, runs only when needed and scales when needed? AWS Lambda is what you’re looking for. It is, arguably, one of the most revolutionary services in cloud history.
  • Object-based storage
    AWS S3 is a great service for saving files on the cloud as objects. Even Dropbox uses S3 to save its files. And you don’t even have to be a developer to use it; its web UI is so simple that anyone can use it just like any web tool. It’s highly secure and scalable and provides advanced features like versioning and caching, but those are not in the scope of this blog.

And the list doesn’t end here, there are many more services like Kinesis, Step Functions, Aurora, etc.
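To make Lambda’s model concrete: a function there is essentially just an exported handler that receives an event and returns a response. Here is a minimal sketch with a simplified, made-up event shape (real API Gateway events carry more fields), invoked locally just to show the idea:

```javascript
// A minimal Lambda-style handler: a plain async function that takes an
// event object and returns a response. The event shape below is a
// simplified stand-in for what API Gateway would actually send.
const handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};

// Invoke it locally, the way the Lambda runtime would on a request.
handler({ queryStringParameters: { name: "Marketcube" } }).then((res) => {
  console.log(res.statusCode, res.body);
});

module.exports = { handler };
```

That’s the whole unit of deployment: no server process, no port to listen on, just a function that runs when an event arrives and disappears when it’s done.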

I am gonna end by saying that “serverless is not the future, it’s the present”.

The post Why you should go Serverless appeared first on NodeXperts Blog.

Many of us struggle while taking any kind of decision, as we all want our decision to create an impact on each and every one involved in it.

Our struggle grows when we are leading someone (mostly in an organisation) and are taking decisions for their betterment,
but later we look back on our decision wistfully.

So, here are a few things we need to follow while making our decision so that we don’t have to regret it later.

Identify, analyse and evaluate the situation using the following in the proper manner:


Observe and explore the situation from every angle, its merits and demerits, and how it is going to affect our team in both positive and negative aspects. Jot down each and every point.


Use your prior experience as needed.


Put yourself in that situation: you are the one who is going to be affected by this, and someone else is going to make this decision. List all the consequences of this.


Check out all the possible outcomes along with the suitable actions that could be taken. List down all the actions, along with why you want to go with each and how it is going to affect the system.


Communicate with other team members to know their views without letting them know about this, and try to get reviews from others. You may develop a questionnaire and disseminate it to the team members.

Hope this process helps you in your decision making.

PS: I read this somewhere and liked it:
Leaders who rise to the top are ones who constantly sharpen their strategic thinking by questioning their own views, listening to different viewpoints, surrounding themselves with people from different expertise areas and doing anything else they can to gain different perspectives.

The post How to make an effective decision? appeared first on NodeXperts Blog.
