I don't know where this article will end up, but let's start by looking back at what 2020 has been. 2020 started just as 2019 had ended, that is, working as a Technical Coach at Voxel. That meant working 4 days a week, from 9 to 17, doing something I enjoy, like helping a team grow, and working from home. Compared with my previous job I had lost some purchasing power (same rate, but in euros instead of pounds, and 4 days a week instead of five), but I had gained a lot of quality of life.
This week I wanted to investigate how to use a Pub/Sub mechanism from an on-premise environment without needing to install anything else in our infrastructure, that is, using a managed service. I also wanted to make my first tests with CDK. Let me show you what I did.

Goal

What I want to accomplish is for a system that lives in our infrastructure to communicate asynchronously with another system in that same infrastructure. It's easy to simulate the sender of the message with a console application, but for the receiver I'll need a public HTTP endpoint.
AWS recently introduced Lambda destinations for asynchronous invocations. So, if you have, let's say, a Lambda function attached to an SNS event, you can configure a destination for when the execution is successful and a destination for when the execution fails. The destination can be either an SQS queue, an SNS topic, EventBridge, or another Lambda function. As usual, the serverless framework implemented this feature quickly. Let's take a look at how to do it and what the difference is with a DLQ.
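As a rough sketch, a destinations setup in serverless.yml might look like the following (the function name, topic name, and ARNs are made up for illustration, not taken from the article):

```yaml
functions:
  processMessage:
    handler: handler.processMessage
    events:
      - sns: my-topic # hypothetical SNS topic that triggers the function
    destinations:
      # Invoked asynchronously: on success, the result is sent to this queue (hypothetical ARN)
      onSuccess: arn:aws:sqs:us-east-1:123456789012:success-queue
      # On failure, the invocation record goes to this topic (hypothetical ARN)
      onFailure: arn:aws:sns:us-east-1:123456789012:failure-topic
```

Note that destinations only apply to asynchronous invocations (such as SNS-triggered ones), and that the `onFailure` record includes the error details, unlike a plain DLQ message.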
Let me introduce you to Rachel, the new developer on the team. Rachel is an excellent developer, eager to make a huge impact on the team and the organisation. When Rachel lands on the team, everyone treats her very well. They show her all the facilities, they teach her how to make a good coffee with the new and shiny coffee machine, and they even deploy to production on her first day.
The minute after I published my last article about Capturing and forwarding correlation IDs, my very good friend Hugo Biarge sent me a Direct Message telling me: “Hey man! Have you read this article? This is new in ASP.NET Core 3, and it’s an easier solution than the one you explain in the article.” So, I took a look, not only at the article but also at the traces I was already generating, and voilà, everything was already there.
When you have different services that communicate with each other, you need to be able to correlate those calls to perform an effective analysis of any problem you might have. The way to do this is to use correlation IDs and pass them along in all your calls to the different services you use. In this article, I’m going to explain a way to do this in ASP.NET Core.

Correlation IDs

Why do you need more than one correlation ID?
In the last two articles (here and here) we implemented some of the Serverless Patterns described in this article from Jeremy Daly. In this article, we’re going to concentrate on just one pattern, the Notifier. We’re going to do this because of the recent announcement from AWS that you can now use an SQS queue as a Dead Letter Queue for an SNS topic. If you read the article, you will see that this DLQ is complementary to the DLQ you might define in a function that is triggered by an SNS topic, as Otavio Ferreira explains here.
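To make the distinction concrete, here is a sketch of what the SNS-subscription DLQ could look like in serverless.yml (topic, queue, and function names are hypothetical, chosen for illustration): the `redrivePolicy` sits on the SNS subscription itself, so messages that SNS cannot deliver to the function end up in the queue.

```yaml
functions:
  notifier:
    handler: handler.notify
    events:
      - sns:
          topicName: orders-topic # hypothetical topic name
          redrivePolicy:
            # Undeliverable messages from this subscription go to the queue below
            deadLetterTargetRef: NotifierDLQ

resources:
  Resources:
    NotifierDLQ:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: notifier-dlq # hypothetical queue name
```

This catches delivery failures between SNS and the function, whereas a DLQ (or `onFailure` destination) on the function catches failures inside the function's own execution.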
In this article I will continue with the implementation of some of the Serverless Patterns described in Jeremy Daly’s article. Check the first post here. Let’s start!

Common setup

All the projects will have a common setup, which is fairly simple. First, initialize a NodeJS project with `yarn init`. Then install the serverless framework as a dev dependency with `yarn add serverless --dev`. And finally, create a script to deploy the project.
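For the deploy script, a minimal package.json entry might look like this (the script name and stage are illustrative assumptions, not taken from the article):

```json
{
  "scripts": {
    "deploy": "serverless deploy --stage dev"
  }
}
```

With that in place, `yarn deploy` runs the locally installed serverless framework against the chosen stage.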
I think that the best way to learn something is to practice it and to try to explain it, so this is what I’m going to do in the next series of posts. These posts will be based on the amazing article from Jeremy Daly about Serverless Patterns. I’m not going to copy Jeremy’s words here, so for each pattern, go to the article and read it. I’ll provide a technical implementation here and I will mention more resources I found interesting.
Congratulations! Your startup is starting to attract the attention of many people, and you’re starting to have clients from different countries and continents. But your Lambdas and API Gateway are still in your initial region, and that might add some latency for some users. Apart from that, you want to increase the reliability of your system. So, you decide to go multi-region. Can you do that easily? In this article, we’ll see how to do that.