As mentioned in a previous post, I started learning Node.js only in the last few months. I had a rough start, as the fully asynchronous nature of Node, and the many ways it can be leveraged, wasn’t something I was used to. I battled a bit with that, learned a lot, and also figured out how to properly use Tedious to take advantage of Azure SQL in my projects.
But using Tedious is…t̶e̶d̶i̶o̶u̶s̶ verbose. Also, the way it manages asynchronous calls is quite different from the modern async/await pattern.
Recently I found a quite common request on StackOverflow. Generalizing the problem, it can be described as the requirement to insert some data into a table only if that data is not there already.
Many developers will solve it by trying to execute two steps:

1. Check if the data already exists in the table
2. If it doesn’t, insert it
This approach has a flaw, whatever database you are using, relational or not. The problem, in fact, lies in the algorithm itself.
The moment you have two actions, where one depends on another, you need to make sure…
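To see why the two-step approach is fragile, here is a minimal sketch (in plain Node.js, with an array standing in for the table — all names are illustrative) of what happens when two concurrent requests both run the “check” step before either runs the “insert” step:

```javascript
// A plain array standing in for a database table.
const table = [];

function check(value) {
  return table.includes(value); // step 1: does the row already exist?
}

function insert(value) {
  table.push(value); // step 2: insert it
}

// Two concurrent clients both run step 1 before either runs step 2:
const aSeesIt = check("42");
const bSeesIt = check("42");
if (!aSeesIt) insert("42"); // client A: "it's not there, insert it"
if (!bSeesIt) insert("42"); // client B: "it's not there, insert it"

console.log(table); // [ '42', '42' ] — a duplicate, despite both checks
```

Each client’s check was correct at the moment it ran, yet the end result is a duplicate: without the database treating check-and-insert as a single atomic operation, the gap between the two steps is a race condition waiting to happen.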
If you are about to start your next project, which will presumably involve the creation of a backend REST API that will interact with data stored in a database, you should do so by using a modern approach and apply DevOps principles right from the start.
To do that, GitHub Actions is an amazing tool for building a CI/CD pipeline. You know that already, I’m sure. (If not, make sure you take a look here: GitHub Actions).
Now, I could spend hours and hours on the subject, but I guess the best way to show that is with live coding…
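To give a flavor of what such a pipeline looks like, here is a minimal sketch of a GitHub Actions workflow for a Node.js backend (the workflow name, branch, and Node version here are placeholder assumptions, not a prescription):

```yaml
# .github/workflows/ci.yml — minimal build-and-test pipeline sketch
name: build-and-test
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```

From here, a deployment job (for example, publishing the API to Azure) would be added as an additional job that depends on the build succeeding.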
Azure SQL Hyperscale is the latest architectural evolution of Azure SQL, natively designed to take advantage of the cloud. One of the key features of this new architecture is the complete separation of Compute Nodes and Storage Nodes. This allows for independent scaling of each service, making Hyperscale more flexible and elastic.
In this article I will describe how to implement a solution that automatically scales your Azure SQL Hyperscale database up or down, adapting to different workload levels dynamically, without requiring any manual intervention.
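At its core, scaling a Hyperscale database comes down to issuing a single T-SQL `ALTER DATABASE … MODIFY (SERVICE_OBJECTIVE = …)` command. As a sketch, a Node.js script could build that command like this (the database name is illustrative, and the `HS_Gen5_*` service objective naming is an assumption — check the actual service objective names available for your tier):

```javascript
// Build the T-SQL command that moves a Hyperscale database to a given
// number of vCores. 'HS_Gen5_<n>' is the Gen5 Hyperscale service
// objective naming; verify the names available in your region/tier.
function buildScaleCommand(databaseName, vCores) {
  const serviceObjective = `HS_Gen5_${vCores}`;
  return `ALTER DATABASE [${databaseName}] MODIFY (SERVICE_OBJECTIVE = '${serviceObjective}');`;
}

console.log(buildScaleCommand("WideWorldImporters", 4));
// ALTER DATABASE [WideWorldImporters] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_4');
```

The interesting part of an autoscaling solution is everything around this command: deciding *when* to scale, based on observed workload metrics, and executing the command without manual intervention.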
Just before Ignite, a very interesting case study done with RXR was released, showcasing their IoT solution to bring safety to buildings during COVID times. It uses Azure SQL to store warm data, allowing it to be served to all downstream consumers, from analytical applications to mobile clients, dashboards, APIs and business users.
If you haven’t done so yet, you definitely should watch the Ignite recording (the IoT part starts at minute 22:59). Not only is the architecture presented super interesting, but the guest presenting it — Tara Walker — is also super entertaining and joyful…
If you want to start coding and create your own solutions — be it an app, a website or something else — or if you want to start a career as a developer, you’re in luck!
If you are new to Node.js like I am, using Tedious to access Azure SQL can be challenging at the beginning. My understanding is that Tedious, while being fully asynchronous, supports neither Promises nor the more modern async/await pattern. Tedious, in fact, uses events to execute asynchronous code, so a bit of work is needed to make it compatible with Promises.
At the end of the day it’s just a few lines of code, but the process of discovering those lines can be quite long and sometimes frustrating. There is no clear statement anywhere that shows how…
TodoMVC is a very well known (like ~27K GitHub stars known) application among developers, as it is a really great way to start learning a new Model-View-Something framework. It has plenty of samples done with different frameworks, all implementing exactly the same solution. This way it is very easy to compare them against each other and see which one you prefer. Creating a To-Do app is easy enough, but not too easy, which makes it the perfect playground to learn a new technology.
I’m preparing a series of posts and samples on how to properly load data into Azure SQL using Azure Databricks / Apache Spark that I will start to publish very soon, but I realized today that there is a prerequisite that is often overlooked, especially by developers new to the data space: good table design.
Wait! If you’re not an Apache Spark user you might think this post is not for you. Please read on, it will take just a couple of minutes, and you will find something helpful for you too, I promise.
By good table design…
Azure Functions is another pretty popular solution that developers use to create scalable solutions without having to deal with all the infrastructural woes: you just code your own function, deploy it and…done! No IIS or Apache to configure and monitor, no headaches setting up, configuring and maintaining a load-balanced cluster…just the sheer joy of coding!
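To show just how little code that “sheer joy” requires, here is a minimal sketch of an HTTP-triggered Azure Function in JavaScript (the handler name and response text are illustrative assumptions; the `context`/`req` signature is the standard shape the Node.js v3 programming model passes to handlers):

```javascript
// A minimal HTTP-triggered Azure Function (Node.js v3 programming model).
// Azure invokes this handler with a context object and the HTTP request;
// setting context.res defines the HTTP response.
const handler = async function (context, req) {
  const name = (req.query && req.query.name) || "world";
  context.res = {
    status: 200,
    body: `Hello, ${name}!`,
  };
};

module.exports = handler;
```

That is the entire deployable unit: no web server to configure, just the function and a small binding configuration file telling Azure it is triggered by HTTP.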