Kubernetes is an open-source container orchestration platform used to automate the deployment, scaling, and management of containerized applications. Azure DevOps is a cloud-based DevOps service that provides a complete CI/CD pipeline for building, testing, and deploying applications. In this article, I will discuss how to deploy a Kubernetes application using Azure DevOps.

Prerequisites

- An Azure subscription
- An Azure DevOps account
- A Kubernetes cluster
- A Docker image

Step 1: Create a Kubernetes Deployment File

Create a Kubernetes deployment file (deployment.yaml) in your source code repository. This file should contain the specifications of your Kubernetes deployment, including the container image, replicas, and ports. Here is an example of a deployment file:

YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: my-app-image:latest
          ports:
            - containerPort: 8080

Step 2: Create an Azure DevOps Pipeline

In your Azure DevOps account, create a new pipeline and select the source code repository where your deployment file is located. Choose the appropriate repository type (e.g., Git) and select the branch to use for the deployment. Next, choose the appropriate template for your pipeline. For Kubernetes deployments, we can use the "Deploy to Kubernetes" template, which is available in the Azure DevOps marketplace.

Step 3: Configure the Azure Kubernetes Service (AKS) Connection

In the pipeline, add a new task for configuring the AKS connection. This task authenticates your pipeline to your AKS cluster. To add this task, search for "Kubernetes" in the task search bar and select the "Configure Kubernetes connection" task. In the task configuration window, select the appropriate Azure subscription and AKS cluster, and provide the Kubernetes namespace and service account information.

Step 4: Add the Kubernetes Deployment Task

After configuring the AKS connection, add the Kubernetes deployment task. Search for "Kubernetes" in the task search bar and select the "Deploy to Kubernetes" task. In the task configuration window, provide the path to your deployment file, select the appropriate image registry, and provide the container image name and tag.

Step 5: Save and Run the Pipeline

Save your pipeline and run it. The pipeline will build the Docker image, push it to the image registry, and deploy it to the Kubernetes cluster.

Conclusion

Kubernetes is a powerful tool for managing containerized applications, and Azure DevOps provides a complete CI/CD pipeline for building, testing, and deploying them. By using these tools together, we can easily deploy applications to Kubernetes clusters. With Azure DevOps, you can automate your deployment process and reduce manual errors, which improves your application's reliability and scalability. We covered the steps for creating a Kubernetes deployment file, creating an Azure DevOps pipeline, configuring the AKS connection, adding the Kubernetes deployment task, and running the pipeline. By following these steps, you can deploy your Kubernetes application using Azure DevOps. Kubernetes has become the de facto standard for container orchestration and management, and with good reason: it is highly scalable, portable, and resilient, making it a great choice for deploying and managing containerized applications.
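The walkthrough above uses the template-based editor; if you prefer defining the pipeline as code, the same flow can be sketched in an azure-pipelines.yml file. This is a minimal sketch, not the article's exact setup: the service connection names (my-registry-connection, my-aks-connection) and the repository/image names are hypothetical placeholders you would replace with your own.

YAML

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Build the Docker image and push it to the container registry
  - task: Docker@2
    inputs:
      containerRegistry: 'my-registry-connection'   # hypothetical registry service connection
      repository: 'my-app-image'
      command: 'buildAndPush'
      Dockerfile: '**/Dockerfile'
      tags: 'latest'

  # Apply the deployment manifest to the AKS cluster
  - task: KubernetesManifest@1
    inputs:
      action: 'deploy'
      kubernetesServiceConnection: 'my-aks-connection'   # hypothetical AKS service connection
      namespace: 'default'
      manifests: 'deployment.yaml'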
In today's interconnected world, application users can span multiple countries and continents. Maintaining low latency across distant geographies while dealing with data regulatory requirements can be a challenge. The geo-partitioning feature of distributed SQL databases can help solve that challenge by pinning user data to the required locations. So, let's explore how you can deploy a geo-partitioned database cluster that complies with data regulations and delivers low latency across multiple regions using YugabyteDB Managed.

Deploying a Geo-Partitioned Cluster Using YugabyteDB Managed

YugabyteDB is an open-source distributed SQL database built on PostgreSQL. You can deploy a geo-partitioned cluster within minutes using YugabyteDB Managed, the DBaaS version of YugabyteDB. Getting started with a geo-partitioned YugabyteDB Managed cluster is easy. Simply follow the steps below:

1. Select the multi-region deployment option. When creating a dedicated YugabyteDB Managed cluster, choose the "multi-region" option to ensure your data is distributed across multiple regions.
2. Set the data distribution mode to "partitioned." Select the "partition by region" data distribution option so that you can pin data to specific geographical locations.
3. Choose target cloud regions. Place database nodes in the cloud regions of your choice. In this blog, we spread data across two regions: South Carolina (us-east1) and Frankfurt (europe-west3).

Once you've set up a geo-partitioned YugabyteDB Managed cluster, you can connect to it and create tables with partitioned data.

Create a Geo-Partitioned Table

To demonstrate how geo-partitioning improves latency and data regulation compliance, let's take a look at an example Account table. First, create PostgreSQL tablespaces that let you pin data to the YugabyteDB nodes in the USA (usa_tablespace) or in Europe (europe_tablespace):

SQL

CREATE TABLESPACE usa_tablespace WITH (
  replica_placement = '{"num_replicas": 3, "placement_blocks": [
    {"cloud":"gcp","region":"us-east1","zone":"us-east1-c","min_num_replicas":1},
    {"cloud":"gcp","region":"us-east1","zone":"us-east1-d","min_num_replicas":1},
    {"cloud":"gcp","region":"us-east1","zone":"us-east1-b","min_num_replicas":1}
  ]}'
);

CREATE TABLESPACE europe_tablespace WITH (
  replica_placement = '{"num_replicas": 3, "placement_blocks": [
    {"cloud":"gcp","region":"europe-west3","zone":"europe-west3-a","min_num_replicas":1},
    {"cloud":"gcp","region":"europe-west3","zone":"europe-west3-b","min_num_replicas":1},
    {"cloud":"gcp","region":"europe-west3","zone":"europe-west3-c","min_num_replicas":1}
  ]}'
);

num_replicas: 3 - Each tablespace stores a copy of the data across three availability zones within a region. This lets you tolerate zone-level outages in the cloud.
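To double-check that the tablespaces were created with the intended placement options, you can query the standard PostgreSQL catalog, which YugabyteDB inherits. A quick sketch; the exact output formatting will vary:

SQL

-- List the custom tablespaces and the options they were created with
SELECT spcname, spcoptions
FROM pg_tablespace
WHERE spcname IN ('usa_tablespace', 'europe_tablespace');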
Second, create the Account table and partition it by the country_code column:

SQL

CREATE TABLE Account (
  id integer NOT NULL,
  full_name text NOT NULL,
  email text NOT NULL,
  phone text NOT NULL,
  country_code varchar(3)
) PARTITION BY LIST (country_code);

Third, define partitioned tables for USA and European records:

SQL

CREATE TABLE Account_USA PARTITION OF Account (
  id, full_name, email, phone, country_code,
  PRIMARY KEY (id, country_code)
) FOR VALUES IN ('USA') TABLESPACE usa_tablespace;

CREATE TABLE Account_EU PARTITION OF Account (
  id, full_name, email, phone, country_code,
  PRIMARY KEY (id, country_code)
) FOR VALUES IN ('EU') TABLESPACE europe_tablespace;

FOR VALUES IN ('USA') - If the country_code equals 'USA', the record is automatically placed in or queried from the Account_USA partition stored in the usa_tablespace (the region in South Carolina).
FOR VALUES IN ('EU') - If the record belongs to the European Union (country_code equals 'EU'), it is stored in the Account_EU partition in the europe_tablespace (the region in Frankfurt).

Now, let's examine the read-and-write latency when a user connects from the United States.

Latency When Connecting From the United States

Let's open a client connection from Iowa (us-central1) to a database node located in South Carolina (us-east1) and insert a new Account record:

SQL

INSERT INTO Account (id, full_name, email, phone, country_code)
VALUES (1, 'John Smith', 'john@gmail.com', '650-346-1234', 'USA');

Because the country_code is 'USA', the record is stored on the database nodes in South Carolina. The write and read latency is approximately 30 milliseconds because the client requests need to travel between Iowa and South Carolina. Next, let's see what happens when we add and query an account with the country_code set to 'EU':

SQL

INSERT INTO Account (id, full_name, email, phone, country_code)
VALUES (2, 'Emma Schmidt', 'emma@gmail.com', '49-346-23-1234', 'EU');

SELECT * FROM Account WHERE id=2 and country_code='EU';

Since this account must be stored in a European data center, the data must travel between the United States and Europe, and the latency increases. The latency for the INSERT (230 ms) is higher than for the SELECT (130 ms) because during the INSERT the record is replicated across three availability zones in Frankfurt. The higher latency between the client connection in the USA and the database node in Europe shows that the geo-partitioned cluster keeps you compliant with data regulatory requirements: even if a client from the USA connects to a US-based database node and writes/reads records of European Union residents, those records are always stored on and retrieved from database nodes in Europe.

Latency When Connecting From Europe

Let's see how the latency improves if you open a client connection from Frankfurt (europe-west3) to the database node in the same region and query the European record recently added from the USA. This time the latency is as low as 3 milliseconds (vs. 130 ms when you queried the same record from the USA) because the record is stored in and retrieved from European data centers. Adding and querying another European record also maintains low latency, since the data is not replicated to the United States.
SQL

INSERT INTO Account (id, full_name, email, phone, country_code)
VALUES (3, 'Otto Weber', 'otto@gmail.com', '49-546-33-0034', 'EU');

SELECT * FROM Account WHERE id=3 and country_code='EU';

When accessing data stored in the same region, latency is significantly reduced. The result is a much better user experience while remaining compliant with data regulatory requirements.

Wrap Up

Geo-partitioning is an effective way to comply with data regulations and achieve global low latency. By deploying a geo-partitioned cluster using YugabyteDB Managed, it's possible to intelligently distribute data across regions while maintaining high-performance querying capabilities.
Event-driven architectures are becoming increasingly popular as a way to build scalable, decoupled, and resilient systems. Sveltos is an open-source project for deploying add-ons across tens of Kubernetes clusters. Sveltos also has a built-in event-driven framework that makes it easy to deploy add-ons on Kubernetes clusters in response to events.

What Is Sveltos?

Sveltos is a powerful open-source project that makes managing Kubernetes add-ons a breeze. It automatically discovers ClusterAPI-powered clusters and allows you to easily register any other cluster (like GKE). Then, it seamlessly manages Kubernetes add-ons across all your clusters. But that's not all! Sveltos comes loaded with features like event-driven add-on deployment (using Lua scripts), configuration drift detection, and multi-tenancy to easily manage permissions for tenant admins. You can even preview changes with a dry run and roll back configurations with ease.

Event-Driven

Sveltos' event-driven framework enables the deployment of add-ons in response to events occurring in managed Kubernetes clusters. Events are defined in the form of Lua scripts, which provide a flexible and powerful way to define complex event conditions. Add-ons are expressed as templates that define the desired state of the resources that will be created in response to the event. These templates can be instantiated using information from resources in the managed clusters, such as labels or annotations, making it easy to dynamically generate new resources based on specific conditions. With Sveltos, you can create highly scalable and flexible event-driven systems that automatically respond to changes in your managed Kubernetes clusters. Whether you're deploying new resources or modifying existing ones, Sveltos makes it easy to manage your infrastructure in a highly automated and efficient way, all while maintaining the reliability and scalability that Kubernetes is known for. Sveltos' event-driven framework is covered here.

Cross-Cluster Configuration

Sveltos' event-driven framework is also designed to enable communication and coordination between multiple Kubernetes clusters, which makes it ideal for cross-cluster configuration. By leveraging Sveltos' cross-cluster configuration, you can create distributed event-driven systems that span multiple clusters, enabling you to build more complex and scalable applications. By default, Sveltos deploys add-ons in the very same cluster where an event is detected. Sveltos, though, can also be configured for cross-cluster configuration: watch for events in one cluster and deploy add-ons in a set of different clusters. The EventBasedAddOn CRD has a field called destinationClusterSelector, a Kubernetes label selector. This field is optional and not set by default, in which case Sveltos' default behavior is to deploy add-ons in the same cluster where the event was detected. If this field is set, Sveltos' behavior changes: when an event is detected in a cluster, add-ons are deployed in all the clusters matching the label selector destinationClusterSelector.

Sveltos cross-cluster configuration.

Imagine you're managing two Kubernetes clusters - one on GKE and another provisioned by ClusterAPI. How do you ensure they're working in harmony? That's where Sveltos comes in. With Sveltos, you can create an EventSource that matches any Service with a load balancer IP and an EventBasedAddOn that references the EventSource and deploys a selector-less Service and corresponding Endpoints in any cluster matching destinationClusterSelector.
In simpler terms, Sveltos allows you to watch for specific events (like load balancer services) and respond to them by automatically deploying new resources across multiple clusters. By defining templates that specify how resources should be instantiated, Sveltos can create new Services and Endpoints on the fly, making it easy to manage complex, distributed systems. For example, in the GKE cluster, you can create a deployment and a Service of type LoadBalancer. The Service will be assigned an IP address and match the EventSource, triggering Sveltos to deploy the selector-less Service and Endpoints in the other cluster - the ClusterAPI-provisioned cluster. And just like that, a pod in the ClusterAPI-provisioned cluster can now reach the service in the GKE cluster, thanks to Sveltos' cross-cluster configuration.

Here Are the YAMLs

YAML

apiVersion: lib.projectsveltos.io/v1alpha1
kind: EventSource
metadata:
  name: load-balancer-service
spec:
  collectResources: true
  group: ""
  version: "v1"
  kind: "Service"
  script: |
    function evaluate()
      hs = {}
      hs.matching = false
      hs.message = ""
      if obj.status.loadBalancer.ingress ~= nil then
        hs.matching = true
      end
      return hs
    end

YAML

apiVersion: lib.projectsveltos.io/v1alpha1
kind: EventBasedAddOn
metadata:
  name: service-policy
spec:
  sourceClusterSelector: env=production
  destinationClusterSelector: dep=eng
  eventSourceName: load-balancer-service
  oneForEvent: true
  policyRefs:
    - name: service-policy
      namespace: default
      kind: ConfigMap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-policy
  namespace: default
data:
  service.yaml: |
    kind: Service
    apiVersion: v1
    metadata:
      name: external-{{ .Resource.metadata.name }}
      namespace: external
    spec:
      selector: {}
      ports:
        {{ range $port := .Resource.spec.ports }}
        - port: {{ $port.port }}
          protocol: {{ $port.protocol }}
          targetPort: {{ $port.targetPort }}
        {{ end }}
  endpoint.yaml: |
    kind: Endpoints
    apiVersion: v1
    metadata:
      name: external-{{ .Resource.metadata.name }}
      namespace: external
    subsets:
      - addresses:
          - ip: {{ (index .Resource.status.loadBalancer.ingress 0).ip }}
        ports:
          {{ range $port := .Resource.spec.ports }}
          - port: {{ $port.port }}
          {{ end }}

Sveltos is a powerful tool for managing distributed Kubernetes clusters and building scalable, reliable applications that can seamlessly span multiple clusters. With Sveltos, you can create event-driven systems that respond automatically to changes in your infrastructure, making it easy to build complex, distributed systems that work together seamlessly.

Summary

- Sveltos' event-driven framework can be configured for cross-cluster deployment, allowing resources to be deployed across multiple clusters.
- An EventSource can be defined to watch for specific events, such as load balancer services, in any cluster.
- An EventBasedAddOn can be configured to respond to those events by deploying new resources in other clusters.
- Sveltos can use templates to instantiate new resources, making it easy to manage complex distributed systems.
- With Sveltos, you can build scalable, reliable applications that seamlessly span multiple clusters, making it a powerful tool for managing Kubernetes infrastructure.
The console.log function - the poor man's debugger - is every JavaScript developer's best friend. We use it to verify that a certain piece of code was executed or to check the state of the application at a given point in time. We may also use console.warn to send warning messages or console.error to explain what happened when things have gone wrong. Logging makes it easy to debug your app during local development. But what about debugging your Node.js app while it's running in a hosted cloud environment? The logs are kept on the server, to which you may or may not have access. How do you view your logs then?

Most companies use application performance monitoring and observability tools for better visibility into their hosted apps. For example, you might send your logs to a log aggregator like Datadog, Sumo Logic, or Papertrail, where logs can be viewed and queried. In this article, we'll look at how we can configure an app that is hosted on Render to send its system logs to Papertrail by using Render Log Streams. By the end, you'll have your app up and running - and logging - in no time.

Creating Our Node.js App and Hosting It With Render

Render is a cloud hosting platform made for developers by developers. With Render, you can easily host your static sites, web services, cron jobs, and more. We'll start with a simple Node.js and Express app for our demo. You can find the GitHub repo here. You can also view the app here. To follow along on your machine, fork the repo so that you have a copy running locally. You can install the project's dependencies by running yarn install, and you can start the app by running yarn start. Easy enough!

Render Log Streams demo app

Now it's time to get our app running on Render. If you don't have a Render account yet, create one now. It's free! Once you're logged in, click the "New" button and then choose the "Web Service" option from the menu.

Creating a new web service

This will take you to the next page, where you'll select the GitHub repo you'd like to connect. If you haven't connected your GitHub account yet, you can do so here. And if you have connected your GitHub account but haven't given Render access to your specific repo yet, you can click the "Configure account" button. This will take you to GitHub, where you can grant access to all your repos or just a selection of them.

Connecting your GitHub repo

Back on Render, after connecting to your repo, you'll be taken to a configuration page. Give your app a name (I chose the same name as my repo, but it can be anything), and then provide the correct build command (yarn, which is a shortcut for yarn install) and start command (yarn start). Choose your instance type (free tier), and then click the "Create Web Service" button at the bottom of the page to complete your configuration setup.

Configuring your app

With that, Render will deploy your app. You did it! You now have an app hosted on Render's platform.

Log output from your Render app's first deployment

Creating Our Papertrail Account

Let's now create a Papertrail account. Papertrail is a log aggregator tool that helps make log management easy. You can create an account for free - no credit card is required. Once you've created your account, click on the "Add your first system" button to get started.

Adding your first system in Papertrail

This will take you to the next page, which provides you with your syslog endpoint at the top of the screen.
There are also instructions for running an install script, but in our case, we don't actually need to install anything! So just copy that syslog endpoint, and we'll paste it in just a bit.

Syslog endpoint

Connecting Our Render App to Papertrail

We now have an app hosted on Render, and we have a Papertrail account for logging. Let's connect the two! Back in the Render dashboard, click on your avatar in the global navigation, then choose "Account Settings" from the drop-down menu.

Render account settings

Then, in the secondary side navigation, click on the "Log Streams" tab. Once on that page, click the "Add Log Stream" button, which opens a modal. Paste your syslog endpoint from Papertrail into the "Log Endpoint" input field, and then click "Add Log Stream" to save your changes.

Adding your log stream

You should now see your Log Stream endpoint shown in Render's dashboard.

Render Log Stream dashboard

Great! We've connected Render to Papertrail. What's neat is that we've set up this connection for our entire Render account, so we don't have to configure it for each individual app hosted on Render.

Adding Logs to Our Render App

Now that we have our logging configured, let's take it for a test run. In our GitHub repo's code, we have the following in our app.js file:

JavaScript

app.get('/', (req, res) => {
  console.log('Log - home page');
  console.info('Info - home page');
  console.warn('Warn - home page');
  console.error('Error - home page');
  console.debug('Debug - home page');
  return res.sendFile('index.html', { root: 'public' });
});

When a request is made to the root URL of our app, we do a bit of logging and then send the index.html file to the client. The user doesn't see any of the logs since these are server-side rather than client-side logs. Instead, the logs are kept on our server, which, again, is hosted on Render. To generate the logs, open your demo app in your browser. This will trigger a request for the home page. If you're following along, your app URL will be different from mine, but my app is hosted here.

Viewing Logs in Papertrail

Let's go find those logs in Papertrail. After all, they were logged to our server, but our server is hosted on Render. In your Papertrail dashboard, you should see at least two systems: one for Render itself, which was used to test the account connection, and one for your Render app ("render-log-stream-demo" in my case).

Papertrail systems

Click on the system for your Render app, and you'll see a page where all the logs are shown and tailed, with the latest logs appearing at the bottom of the screen.

Render app logs in Papertrail

You can see that we have logs for many events, not just the data that we chose to log from our app.js file. These are the syslogs, so you also get helpful log data from when Render was installing dependencies and deploying your app! At the bottom of the page, we can enter search terms to query our logs. We don't have many logs here yet, but when you're running a web service that gets millions of requests per day, these log outputs can get very large very quickly.

Searching logs in Papertrail

Best Practices for Logging

This leads us to some good questions: Now that we have logging set up, what exactly should we be logging? And how should we be formatting our logs so that they're easy to query when we need to find them? What you're logging and why will vary by situation. For example, you may be adding logs after a customer issue is reported that you're unable to reproduce locally.
By adding logs to your app, you can get better visibility into what's happening live in production. This is a reactive form of logging, in which you're adding new logs to certain files and functions after you realize you need them. As a more proactive form of logging, there may be important business transactions that you want to log all the time, such as account creation or order placement. This will give you greater peace of mind that events are being processed as expected throughout the day. It will also help you see the volume of events generated in any given interval. And, when things do go wrong, you'll be able to pinpoint when your log output changed.

How you format your logs is up to you, but you should be consistent in your log structure. In our example, we just logged text strings, but it would be even better to log our data in JSON format. With JSON, we can include key-value pairs for all of our messages. For each message, we might choose to include data for the user ID, the timestamp, the actual message text, and more. The beauty of JSON is that it makes querying your logs much easier, especially when viewing them in a log aggregator tool that contains thousands or millions of other messages.

Conclusion

There you have it - how to host your app on Render and configure logging with Render Log Streams and Papertrail. Both platforms took only minutes to set up, and now we can manage our logs with ease. Keep in mind that Render Log Streams let you send your logs to any of several different log aggregators, giving you lots of options. For example, Render logs can be sent to Sumo Logic - you just need to create a Cloud Syslog Source in your Sumo Logic account. Or you can send your logs to Datadog as well. With that, it's time for me to log off. Thanks for reading, happy coding, and happy logging!
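As a postscript to the JSON-formatting advice above, here is a minimal sketch of what the demo app's route could log instead of plain strings. The helper name and field names are illustrative, not a requirement of Render or Papertrail:

JavaScript

// A tiny helper that emits one JSON object per log line
const logEvent = (level, message, extra = {}) => {
  console.log(JSON.stringify({
    level,                                // e.g., 'info', 'warn', 'error'
    message,                              // human-readable text
    timestamp: new Date().toISOString(),  // when the event happened
    ...extra,                             // any additional key-value pairs
  }));
};

app.get('/', (req, res) => {
  logEvent('info', 'home page requested', { path: req.path });
  return res.sendFile('index.html', { root: 'public' });
});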
Git is a powerful tool for developers, enabling them to track changes in their code, collaborate with others, and manage different versions of a code file. A key feature of Git is the ability to cherry-pick commits - selectively applying changes from one branch to another. In this guide, you will learn how to use the Git cherry-pick command to apply specific commits from one branch to another. By the end of this post, you'll be able to navigate your Git commit history, selectively apply changes, and resolve any conflicts that arise during this process.

Step 1: Understanding Git Commits and Cherry-Pick

Before we dive into the practical aspect of using the Git cherry-pick command, it's crucial to understand what Git commits are and what cherry-picking in Git entails.

Understanding Git Commits

In Git, a commit is a snapshot of your repository at a certain point in time. It includes all the changes you've made since the last commit. Each commit in Git has a unique hash identifier, which is a string of alphanumeric characters generated by a hashing algorithm. This hash serves as an address that allows Git to recall, compare, or manipulate the commit later. By creating commits, you're effectively saving different versions of your code. These versions can be reviewed, compared, and even reverted, offering great flexibility and control over your project's development.

Understanding Git Cherry-Pick

Git cherry-pick is a powerful command that enables you to "pick" a commit from one branch and apply it to another branch. This can be very useful in several scenarios:

- You made a commit on the wrong branch by mistake and want to apply that commit to the correct branch.
- You're working on a feature branch and made a bug fix that also needs to be on the main branch.
- You want to avoid merging an entire branch, but there's a specific commit on that branch that you want to include in your current branch.

It's worth noting that the cherry-pick operation does not remove the commit from the source branch. Instead, it creates a new commit on the target branch that includes the changes from the cherry-picked commit. This way, the history of both branches remains intact. Understanding these basic concepts is the first step towards leveraging the power of version control in Git.

Step 2: Using Git Cherry-Pick on a Single Commit

Now that we have a foundational understanding of Git commits and cherry-pick, let's use the cherry-pick command. In this step, you will apply a single commit from one branch to another.

Switch to the Target Branch

Before you cherry-pick a commit, ensure you're on the branch where you want to apply the commit. Use the git checkout command to switch to this branch:

git checkout <target-branch-name>

Replace <target-branch-name> with the name of your target branch.

Identify the Commit Hash

Next, you need to identify the commit you want to cherry-pick. You can look at your commit history with the git log command. This command will show you a list of all commits, each with its unique hash, author, and commit message.
git log

You'll see an output similar to the following:

commit d4e7618b062bfbeb8f79f430afe5a69a2c2b3396 (HEAD -> main)
Author: Your Name <yourname@example.com>
Date:   Wed Feb 9 14:00:19 2023 -0500

    Fixed the bug in the login feature

commit c3e5749b64e4d3f93f3d5c6e6c5056757e8a74b1
Author: Your Name <yourname@example.com>
Date:   Tue Feb 8 11:25:03 2023 -0500

    Added new feature

From the git log output, identify the hash of the commit you want to cherry-pick. The hash is the alphanumeric string after the word "commit." In this case, if we wanted to cherry-pick the commit where we fixed a bug, we'd copy the hash d4e7618b062bfbeb8f79f430afe5a69a2c2b3396.

Apply the Commit With Git Cherry-Pick

Now that you have the commit hash, you can apply this commit to your current branch using the git cherry-pick command followed by the commit hash:

git cherry-pick d4e7618b062bfbeb8f79f430afe5a69a2c2b3396

Replace d4e7618b062bfbeb8f79f430afe5a69a2c2b3396 with the hash of your commit. Once you run this command, Git will apply the changes from the specified commit to your current branch and create a new commit for these changes. You've now successfully cherry-picked a single commit! In the following steps, you will learn how to cherry-pick multiple commits and resolve conflicts during the cherry-picking process.

Step 3: Using Git Cherry-Pick on Multiple Commits

In the previous step, we learned how to use git cherry-pick to apply a single commit from one branch to another. But what if you want to apply multiple commits? In this step, we'll explore how you can cherry-pick multiple commits.

Switch to the Target Branch

As with cherry-picking a single commit, ensure you're on the branch where you want to apply the commits. Use the git checkout command to switch to this branch:

git checkout <target-branch-name>

Replace <target-branch-name> with the name of your target branch.

Identify the Commit Hashes

Next, you need to identify the commits you want to cherry-pick. Use the git log command to view your commit history and the corresponding commit hashes.

git log

This command will show you an output similar to the following:

commit d4e7618b062bfbeb8f79f430afe5a69a2c2b3396 (HEAD -> main)
Author: Your Name <yourname@example.com>
Date:   Wed Feb 9 14:00:19 2023 -0500

    Fixed the bug in the login feature

commit c3e5749b64e4d3f93f3d5c6e6c5056757e8a74b1
Author: Your Name <yourname@example.com>
Date:   Tue Feb 8 11:25:03 2023 -0500

    Added new feature

From the git log output, identify the hashes of the commits you want to cherry-pick. The hashes are the alphanumeric strings that appear after the word "commit."

Apply the Commits With Git Cherry-Pick

Now that you have the commit hashes, you can apply these commits to your current branch using the git cherry-pick command followed by the commit hashes:

git cherry-pick d4e7618b062bfbeb8f79f430afe5a69a2c2b3396 c3e5749b64e4d3f93f3d5c6e6c5056757e8a74b1

Replace the hashes above with those of your commits.

Note: Git applies the commits in the order you provide them. So, in the command above, Git will first apply commit d4e7618b062bfbeb8f79f430afe5a69a2c2b3396 and then apply commit c3e5749b64e4d3f93f3d5c6e6c5056757e8a74b1.

Once you run this command, Git will apply the changes from the specified commits to your current branch and create a new commit for each of them. Congratulations! You've now successfully cherry-picked multiple commits! In the next step, you will learn how to resolve conflicts during cherry-picking.
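One convenience worth noting before moving on (not covered in the steps above): when the commits you want form a contiguous range, Git lets you cherry-pick the whole range at once instead of listing each hash:

# Apply every commit after A up to and including B (oldest to newest)
git cherry-pick A..B

# Use A^..B to include commit A itself in the range
git cherry-pick A^..B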
Step 4: Resolving Conflicts During Cherry-Picking

During the process of cherry-picking commits from one branch to another, conflicts can arise. These conflicts usually occur when the changes in the commit you're trying to cherry-pick contradict the changes already in your current branch. Git cannot decide which change to accept, so a conflict arises. In this step, you'll learn how to resolve conflicts during cherry-picking. Let's assume you're cherry-picking a commit, and a conflict has occurred. Git will pause the cherry-picking process and give you an error message similar to this:

error: could not apply fa39187... some commit message
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'

Identifying and Viewing Conflicts

To identify the files that are causing conflicts, use the git status command:

git status

Git will show you a list of the files that are causing conflicts. They are usually marked as "unmerged." You can then open these files with your preferred text editor. Inside the files, you'll find the conflicting changes marked in the following way:

<<<<<<< HEAD
changes made on the current branch
=======
changes made in the commit you're cherry-picking
>>>>>>> name of the commit you're cherry-picking

Resolving the Conflicts

Resolving the conflict involves deciding which changes to keep. You may keep the changes in the current branch, the changes in the commit you're cherry-picking, or a combination of both. Edit the file to merge the changes manually. Once you've made your decision, remove the conflict markers (<<<<<<<, =======, >>>>>>>) and save the file.

Continuing the Cherry-Pick

After resolving the conflict in a file, you need to mark it as resolved with Git. Use the git add command followed by the filename:

git add filename

Once you've resolved all conflicts and marked the files as resolved, you can continue the cherry-pick process with the following:

git cherry-pick --continue

Git will then create a new commit with the changes from the cherry-picked commit. If there are no more conflicts, the cherry-pick operation will be complete. If there are more conflicts with the next commit (when cherry-picking multiple commits), the process will pause again, allowing you to resolve these conflicts.

Aborting the Cherry-Pick

If you decide not to continue with the cherry-pick, you can abort the operation using:

git cherry-pick --abort

That command stops the cherry-picking process and brings your branch back to the state it was in before you started the cherry-pick. Remember, conflict resolution in Git requires a good understanding of the changes that have been made and how they should merge together. Always review the code and test the application after resolving conflicts to ensure everything works as expected.

Conclusion

In this article, you learned how to use the Git cherry-pick command to apply specific commits from one branch to another. Now you can selectively pull changes into your current working branch, enhancing your Git workflow. It's important to remember that while cherry-pick is a powerful tool, it is not always the best way to integrate changes from one branch to another. Merge and rebase offer alternative approaches that maintain a clearer history of your project's development.
Native Image technology is gaining traction among developers whose primary goal is to accelerate the startup time of applications. In this article, we will learn how to turn Java applications into native images and then containerize them for further deployment in the cloud. We will use:

- Spring Boot 3.0, with baked-in support for Native Image, as the framework for our Java application;
- Liberica Native Image Kit (NIK) as a native-image compiler;
- Alpaquita Stream as a base image.

Building Native Images From Spring Boot Apps

Installing Liberica NIK

It is best to use a powerful computer with several gigabytes of RAM to work with native images. Opt for a cloud service provided by Amazon or a workstation so as not to overload your laptop. We will be using Linux bash commands further on because bash is a convenient way of accessing the code remotely. macOS commands are similar. As for Windows, you can use any alternative, for instance, the bash included in the Git package for Windows.

Download Liberica Native Image Kit for your system. Choose the Full version for our purposes. Unpack the tar.gz archive with:

tar -xzvf ./bellsoft-liberica.tar.gz

Now, put the compiler on $PATH with:

GRAALVM_HOME=/home/user/opt/bellsoft-liberica
export PATH=$GRAALVM_HOME/bin:$PATH

Check that Liberica NIK is installed:

java -version
openjdk version "17.0.5" 2022-10-18 LTS
OpenJDK Runtime Environment GraalVM 22.3.0 (build 17.0.5+8-LTS)
OpenJDK 64-Bit Server VM GraalVM 22.3.0 (build 17.0.5+8-LTS, mixed mode, sharing)

native-image --version
GraalVM 22.3.0 Java 17 CE (Java Version 17.0.5+8-LTS)

If you get the error "java: No such file or directory" on Linux, you installed the binary for Alpine Linux, not Linux. Check the binary carefully.

Creating a Spring Boot Project

The easiest way to create a new Spring Boot project is to generate one with Spring Initializr. Select Java 17, Maven, JAR, and the Spring SNAPSHOT version (3.0.5 at the time of writing this article), then fill in the fields for project metadata. We don't need any dependencies. Add the following code to your main class:

System.out.println("Hello from Native Image!");

Spring has a separate plugin for native compilation, which utilizes multiple context-dependent parameters under the hood. Let's add the required configuration to our pom.xml file:

XML

<profiles>
  <profile>
    <id>native</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.graalvm.buildtools</groupId>
          <artifactId>native-maven-plugin</artifactId>
          <executions>
            <execution>
              <id>build-native</id>
              <goals>
                <goal>compile-no-fork</goal>
              </goals>
              <phase>package</phase>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Let's build the project with the following command:

./mvnw clean package -Pnative

The resulting native image is in the target directory.

Write a Dockerfile

We need to write a Dockerfile to generate a Docker container image. Put the following file into the application folder:

Dockerfile

FROM bellsoft/alpaquita-linux-base:stream-musl
COPY target/native-image-demo .
CMD ["./native-image-demo"]

Where we:

- Create an image from the Alpaquita Linux base image (the native image doesn't need a JVM to execute);
- Copy the app into the new image;
- Run the program inside the container.

We can also skip the step of installing Liberica NIK and build a native image straight in a container, which is useful when the development and deployment architectures are different.
For that purpose, create another folder, put your application there, and use the following Dockerfile:

Dockerfile

FROM bellsoft/liberica-native-image-kit-container:jdk-17-nik-22.3-stream-musl as builder
WORKDIR /home/myapp
ADD native-image-demo /home/myapp/native-image-demo
RUN cd native-image-demo && ./mvnw clean package -Pnative

FROM bellsoft/alpaquita-linux-base:stream-musl
WORKDIR /home/myapp
COPY --from=builder /home/myapp/native-image-demo/target/native-image-demo .
CMD ["./native-image-demo"]

Where we:

- Specify the base image for Native Image generation;
- Point to the directory where the image will execute inside Docker;
- Copy the program to the directory;
- Build a native image;
- Create another image from the Alpaquita Linux base image (the native image doesn't need a JVM to execute);
- Specify the executable directory;
- Copy the app into the new image;
- Run the program inside the container.

Build a Native Image Container

To generate a native image and containerize it, run:

docker build .

Note that if you use an Apple M1 machine, you may experience trouble building a native image inside a container. Check that the image was created with the following command:

Shell

docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<none>       <none>    8ebc2a97ef8e   18 seconds ago   45.2MB

Tag the newly created image:

docker tag 8ebc2a97ef8e nik-example

Now you can run the image with:

docker run -it --rm 8ebc2a97ef8e
Hello from Native Image!

Conclusion

Native image containerization is as simple as creating Docker container images of standard Java apps. Much trickier is migrating a Java application to Native Image. We used a simple program that didn't require any manual configuration, but dynamic Java features (Reflection, JNI, Serialization, etc.) are not supported out of the box by GraalVM, so you have to make the native-image tool aware of them.
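As a pointer for that last step: GraalVM ships a tracing agent that can record reflection, JNI, and resource usage while the application runs on a regular JVM, and emit the JSON configuration files the native-image tool consumes. A typical invocation is sketched below; the paths and JAR name are illustrative:

Shell

# Run the app on a normal JVM with the tracing agent attached;
# the agent writes reflect-config.json, jni-config.json, etc.
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar target/native-image-demo.jar

# Subsequent native-image builds pick the configuration up automatically
# from META-INF/native-image on the classpath.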
Welcome to the follow-up to How To Do Code Reviews, with many moooooore details on the human factors involved in a code review, as well as several options on how to approach reviewing pull requests. Just a quick recap: what is the scenario? A user sent in a GitHub pull request for our Google Photos clone, which means we have to do a code review. How should you do such a review? What is or isn't important? Let's find out in this episode of Marco Codes.

What's in the Video

00:00 Intro
In the previous episode, we did a code review of a pull request, but due to the way it was edited, there was a lot of missing context. We will try and add context in this episode and look at a variety of factors involved in code reviews.

01:07 What Is a Code Review?
Even though a lot of people seemingly agree on what a code review is, they differ from team to team and company to company. We will learn about code reviews as happening on a spectrum, from very conservative styles such as in the Linux Kernel to more laissez-faire styles, where people superficially review an insane amount of code changes.

02:48 Levels
Code reviews differ depending on the actual skill level of the people involved in them. Is the reviewee junior, or the reviewer senior? Are two seniors reviewing each other? We'll have a look at how feedback differs depending on these levels.

05:26 Ego
Ego is involved in every review, and it should be kept out as much as possible. This goes both ways: the reviewer shouldn't approach the review with an "I know everything better" attitude, and the reviewee shouldn't see comments as personal attacks, but rather as a chance to learn something.

06:13 Philosophy
What is the general code review philosophy in the company? Is it merely about reasoning about edge cases, or is it more of a code review++, where the reviewer is expected to deeply reason about every proposed code change?

07:19 Project Type
Is it a public project? Maybe an open-source project on GitHub? Or is it a commercial project that you work on together with a close-knit team? Depending on the project type, it is or isn't possible to reject pull requests, and hence your review style will also differ.

08:24 Location
Can you quickly walk into another room to sit together with the reviewee and discuss code changes together? Or do you have to play comment ping-pong through a web application? This will significantly affect your code review style.

08:59 Time
Most importantly, is the company you are working for willing to allow the time needed to do proper reviews?

10:38 What We Will Review
In this segment, we have a quick recap of the original problem that was solved through the pull request for my Google Photos clone.

11:51 Review Style
I will elaborate on the style used for this review, taking into account all the variables mentioned earlier.

13:02 Inspecting a Pull Request
Time to do the actual code review. Let's fire up an editor and see how we can specifically review a pull request.

19:04 Giving Feedback
As a result of our code review, we have different possibilities for providing feedback to the reviewee. Let's talk about the 2.5 ways that make sense for this review!

22:44 Outro
What are your thoughts on code reviews? How did you do them in the past? Let me know!
API Gateway is the AWS service that allows interfacing an application's back-end with its front-end. The figure below shows an example of such an application, consisting of a web/mobile-based front-end and a back-end residing in a REST API, implemented as a set of serverless Lambda functions, as well as a number of legacy services.

The figure above illustrates the so-called Legacy API Proxy design pattern, as described by Peter Sbarski, Yan Cui, and Ajay Nair in their excellent book Serverless Architectures on AWS (Manning, 2022). This pattern refers to a use case where Amazon API Gateway and Lambda are employed together in order to create a new API layer over legacy APIs and services, so as to adapt and reuse them. In this design, the API Gateway exposes a REST interface invoking Lambda functions which, in turn, modify the requests and the responses or transform data to legacy-specific formats. This way, legacy services may be consumed by modern clients that don't support older protocols.

This can be done, of course, using the AWS Console: select the API Gateway service and, with the help of the proposed GUI (Graphical User Interface), browse among the dozens of possible options so that, about one hour later, you arrive at a functional skeleton. And when our API specifications change, i.e., several times per month, we need to start again from the beginning. We shall not proceed that way. We will rather adopt an IaC (Infrastructure as Code) approach, consisting of defining our API in a repeatable and deterministic manner. This could be done in several ways, via a script-based automation process using, for example, the AWS CLI (Command Line Interface), CloudFormation, or Terraform. But there is another interesting alternative that many developers prefer: OpenAPI. And it's this alternative that we chose to use here, as shown further below.

Designing the REST Interface With OpenAPI

In 2011, SmartBear Software, a small company specializing in testing and monitoring tools, developed Swagger, a set of utilities dedicated to the creation and documentation of RESTful services. Several years later, in November 2015, under the auspices of the Linux Foundation, this same company announced the creation of a new organization named the OpenAPI Initiative. Other major companies, like Google and IBM, committed as founding members. In January 2016, Swagger changed its name and became OpenAPI.

OpenAPI is a formalism based on the YAML notation, which can also be expressed in JSON. It aims at defining REST APIs in a language-agnostic manner. There are currently a lot of tools around OpenAPI, and our goal here isn't to look extensively at all the possibilities these tools offer. One of the most common use cases is probably to log in to the SwaggerHub online service, create a new API project, export the resulting YAML file, and use it in conjunction with the SAM (Serverless Application Model) tool in order to expose the given API via Amazon API Gateway. And since we need to illustrate the modus operandi described above, let's consider the use case of a money transfer service named send-money. This service, as its name clearly shows, is responsible for performing bank account transfers.
It exposes a REST API whose specifications are presented in the table below:

Resource | HTTP Request | Action | Java Class
/orders | GET | Get the full list of the currently registered orders | GetMoneyTransferOrders
/orders | POST | Create a new money transfer order | CreateMoneyTransferOrder
/orders | PUT | Update an existing money transfer order | UpdateMoneyTransferOrder
/orders/{ref} | GET | Get the money transfer order identified by its reference passed as an argument | GetMoneyTransferOrder
/orders/{ref} | DELETE | Remove the money transfer order identified by its reference passed as an argument | RemoveMoneyTransferOrder

This simple use case, consisting of a CRUD (Create, Read, Update, Delete) service exposed as a REST API, is the one we chose to implement here to illustrate the scenario described above. Here are the required steps:

Go to the Send Money API on SwaggerHub. There you'll find an already prepared project showing the OpenAPI specification of the REST API defined in the table above. This is a public project and, in order to get access, one doesn't need to register and log in. You'll be presented with a screen similar to the one in the figure below.

This screen shows in its left pane the OpenAPI description of our API. Once again, a full explanation of the OpenAPI notation is out of scope here, as this topic could fill an entire book, like the excellent one by Joshua S. Ponelat and Lukas L. Rosenstock, titled Designing APIs with Swagger and OpenAPI (Manning, 2022). The right pane of the screen presents schematically the HTTP requests of our API and allows, among other things, testing it. You may spend some time browsing in this part of the screen by clicking the button labeled with an HTTP request and then selecting Try it out. Notice that these tests are simulated, of course, as there is no concrete implementation behind them. However, they allow you to make sure that the API is correctly defined, from both a syntactic and a semantic point of view.

Once you've finished playing with the test interface, you can use the Export -> Download API -> YAML Resolved function located in the screen's rightmost upper corner to download our API's OpenAPI definition in YAML format. In fact, you don't really have to do that, because you can find this same file in the Maven project used to exemplify this blog post.

Now let's have a quick look at this YAML file. The first thing we notice is the declaration openapi:, which defines the version of the notation we're using: in this case, 3.0.0. The section labeled info: identifies general information like the API name, its author, the associated contact details, etc. The next element, labeled servers:, defines the auto-mocking function. It allows us to run the simulated tests outside the SwaggerHub site. Just copy the URL declared here and use it with your preferred browser. Last but not least, we have the element labeled paths:, where our API endpoints are defined. There are two such endpoints: /orders and /orders/{ref}. For each one, we define the associated HTTP requests, their parameters, and the responses, including the HTTP headers.

OpenAPI is an agnostic notation and, consequently, isn't bound to any specific technology, framework, or programming language. However, AWS-specific extensions are available. One of these extensions is x-amazon-apigateway-integration, which allows a REST endpoint to connect to the API Gateway.
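For readers who haven't seen this extension before, here is a minimal sketch of what such a declaration typically looks like under a path operation. The region, account ID, and function name in the URI are placeholders, not values taken from the project:

YAML

/orders:
  get:
    responses:
      '200':
        description: OK
    x-amazon-apigateway-integration:
      type: aws_proxy
      httpMethod: POST   # Lambda invocations are always made with POST
      uri: arn:aws:apigateway:eu-west-3:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-3:123456789012:function:MyFunction/invocations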
As you can see looking at the OpenAPI YAML definition, each endpoint includes an element labeled x-amazon-apigateway-integration which declares, among other things, the URL of the Lambda function to which the call will be forwarded.

The Project

Okay, we have an OpenAPI specification of our API. In order to generate an API Gateway stack out of it and deploy it on AWS, we will use SAM, as explained above. For more details on SAM and how to use it, please don't hesitate to have a look here. Our Java project containing all the required elements may be found here. Once you've cloned it from GitHub, open the file template.yaml. We reproduce it below:

YAML

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Send Money SAM Template
Globals:
  Function:
    Runtime: java11
    MemorySize: 512
    Timeout: 10
    Tracing: Active
Parameters:
  BucketName:
    Type: String
    Description: The name of the S3 bucket in which the OpenAPI specification is stored
Resources:
  SendMoneyRestAPI:
    Type: AWS::Serverless::Api
    Properties:
      Name: send-money-api
      StageName: dev
      DefinitionBody:
        Fn::Transform:
          Name: AWS::Include
          Parameters:
            Location:
              Fn::Join:
                - ''
                - - 's3://'
                  - Ref: BucketName
                  - '/openapi.yaml'
  MoneyTransferOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: MoneyTransferOrderFunction
      CodeUri: send-money-lambda/target/send-money.jar
      Handler: fr.simplex_software.aws.lambda.send_money.functions.MoneyTransferOrder::handleRequest
      Events:
        GetAll:
          Type: Api
          Properties:
            RestApiId:
              Ref: SendMoneyRestAPI
            Path: /orders
            Method: GET
        Get:
          Type: Api
          Properties:
            RestApiId:
              Ref: SendMoneyRestAPI
            Path: /orders/{ref}
            Method: GET
        Create:
          Type: Api
          Properties:
            RestApiId:
              Ref: SendMoneyRestAPI
            Path: /orders
            Method: POST
        Update:
          Type: Api
          Properties:
            RestApiId:
              Ref: SendMoneyRestAPI
            Path: /orders
            Method: PUT
        Delete:
          Type: Api
          Properties:
            RestApiId:
              Ref: SendMoneyRestAPI
            Path: /orders/{ref}
            Method: DELETE
  ConfigLambdaPermissionForMoneyTransferOrderFunction:
    Type: "AWS::Lambda::Permission"
    DependsOn:
      - SendMoneyRestAPI
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref MoneyTransferOrderFunction
      Principal: apigateway.amazonaws.com

Our template.yaml file will create an AWS CloudFormation stack containing an API Gateway. This API Gateway will be generated from the OpenAPI specification that we just discussed. The DefinitionBody element in the SendMoneyRestAPI resource says that the API's endpoints are described by the file named openapi.yaml, located in an S3 bucket whose name is passed as an input parameter. The idea here is that we need to create a new S3 bucket, copy our OpenAPI specification into it in the form of a YAML file, and use this bucket as an input source for the AWS CloudFormation stack containing the API Gateway. A Lambda function, named MoneyTransferOrderFunction, is defined in this same SAM template as well. The CodeUri parameter configures the location of the Java archive which contains the associated code, while the Handler parameter declares the name of the Java method implementing the AWS Lambda request handler. Last but not least, the Events paragraph sets the HTTP requests that our Lambda function serves.
As you can see, there are five endpoints, labeled as follows (each defined in the OpenAPI specification):

- GetAll, mapped to the GET /orders operation
- Get, mapped to the GET /orders/{ref} operation
- Create, mapped to the POST /orders operation
- Update, mapped to the PUT /orders operation
- Delete, mapped to the DELETE /orders/{ref} operation

To build and deploy the project, proceed as shown in the listing below:

Shell

$ mkdir test-aws
$ cd test-aws
$ git clone https://github.com/nicolasduminil/aws-showcase
...
$ mvn package
...
$ ./deploy.sh
...
make_bucket: bucketname-3454
upload: ./open-api.yaml to s3://bucketname-3454/openapi.yaml
Uploading to 73e5d262c96743505970ad88159b929b  2938384 / 2938384  (100.00%)

Deploying with following values
===============================
Stack name                 : money-transfer-stack
Region                     : eu-west-3
Confirm changeset          : False
Disable rollback           : False
Deployment s3 bucket       : bucketname-3454
Capabilities               : ["CAPABILITY_IAM"]
Parameter overrides        : {"BucketName": "bucketname-3454"}
Signing Profiles           : {}

Initiating deployment
=====================
Uploading to b0cf548da696c5a94419a83c5088de48.template  2350 / 2350  (100.00%)
Waiting for changeset to be created..
CloudFormation stack changeset
...
Successfully created/updated stack - money-transfer-stack in eu-west-3
Your API with ID mtr6ryktjk is deployed and ready to be tested at https://mtr6ryktjk.execute-api.eu-west-3.amazonaws.com/dev

In this listing, we start by cloning the Git repository containing the project. Then, we execute a Maven build, which packages the Java archive named send-money-lambda.jar after having performed some unit tests. The script deploy.sh, as its name implies, is responsible for fulfilling the deployment operation. Its code is reproduced below:

Shell

#!/bin/bash
RANDOM=$$
BUCKET_NAME=bucketname-$RANDOM
STAGE_NAME=dev
AWS_REGION=$(aws configure list | grep region | awk '{print $2}')
aws s3 mb s3://$BUCKET_NAME
echo $BUCKET_NAME > bucket-name.txt
aws s3 cp open-api.yaml s3://$BUCKET_NAME/openapi.yaml
sam deploy --s3-bucket $BUCKET_NAME --stack-name money-transfer-stack --capabilities CAPABILITY_IAM --parameter-overrides BucketName=$BUCKET_NAME
aws cloudformation wait stack-create-complete --stack-name money-transfer-stack
API_ID=$(aws apigateway get-rest-apis --query "items[?name=='send-money-api'].id" --output text)
aws apigateway create-deployment --rest-api-id $API_ID --stage-name $STAGE_NAME >/dev/null 2>&1
echo "Your API with ID $API_ID is deployed and ready to be tested at https://$API_ID.execute-api.$AWS_REGION.amazonaws.com/$STAGE_NAME"

Here, the assignment RANDOM=$$ seeds Bash's built-in RANDOM generator with $$, the current shell's process ID. By appending the resulting random number to the S3 bucket name that will be used to store the OpenAPI specification file, we satisfy the bucket name's region-wide uniqueness condition. This bucket name is then stored in a local file so that it can later be retrieved and cleaned up. Notice also the aws configure command, used to get the current AWS region. The command aws s3 mb creates the S3 bucket; here, mb stands for "make bucket." Once the bucket is created, we use it to store the open-api.yaml file containing the API specification, which is done with the command aws s3 cp. Now we are ready to start the deployment process, done through the sam deploy command. Since this operation might take a while, we need to wait until the AWS CloudFormation stack is completely created before continuing.
If you prefer a graphical tool, you may use Postman, if you have it installed, or simply the AWS Console, which offers a nice and intuitive test interface. If you decide to use the AWS Console, select the API Gateway service and you'll be presented with the list of all existing API Gateways. Clicking on the one named send-money-api displays the list of endpoints to be tested. Start, of course, by creating a new money transfer order. You can do this by pasting the JSON payload below into the request body of the POST operation:

JSON
{
  "amount": 200,
  "reference": "reference",
  "sourceAccount": {
    "accountID": "accountId",
    "accountNumber": "accountNumber",
    "accountType": "CHECKING",
    "bank": {
      "bankAddresses": [
        {
          "cityName": "poBox",
          "countryName": "countryName",
          "poBox": "cityName",
          "streetName": "streetName",
          "streetNumber": "10",
          "zipCode": "zipCode"
        }
      ],
      "bankName": "bankName"
    },
    "sortCode": "sortCode",
    "transCode": "transCode"
  },
  "targetAccount": {
    "accountID": "accountId",
    "accountNumber": "accountNumber",
    "accountType": "CHECKING",
    "bank": {
      "bankAddresses": [
        {
          "cityName": "poBox",
          "countryName": "countryName",
          "poBox": "cityName",
          "streetName": "streetName",
          "streetNumber": "10",
          "zipCode": "zipCode"
        }
      ],
      "bankName": "bankName"
    },
    "sortCode": "sortCode",
    "transCode": "transCode"
  }
}

If the status code appearing in the AWS Console is 200, the operation has succeeded, and you can now test the two GET operations: the one retrieving all existing money transfer orders and the one fetching the money transfer order identified by its reference. For the latter, initialize the input parameter of the HTTP GET request with the value of the money transfer order reference, which, in our test, is simply "reference". To test the PUT operation, paste into its body the same JSON payload used to test the POST, slightly modified; for example, change the amount from 200 to 500. Run the two GET operations again: they should now retrieve the updated money transfer order, this time with an amount of 500. When you have finished exploring the AWS Console interface, test the DELETE operation, passing the same reference as its input parameter. After that, the two GET operations should return an empty result set. If you're tired of using the AWS Console, you can switch to the provided integration test. First, open the FunctionsIT class in the send-money-lambda Maven module and make sure that the static constant named AWS_GATEWAY_URL matches the URL displayed by the deploy.sh script. Then compile and run the integration tests as follows:

Shell
mvn test-compile failsafe:integration-test

You should see statistics showing that all the integration tests have succeeded. Have fun!
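When you are done, remember to remove the AWS resources to avoid unnecessary charges. The deploy.sh script saved the generated bucket name in bucket-name.txt; the commands below are a minimal cleanup sketch based on that file and on the stack name used above (adapt them to your setup):

Shell
# Delete the CloudFormation stack created by sam deploy
aws cloudformation delete-stack --stack-name money-transfer-stack
aws cloudformation wait stack-delete-complete --stack-name money-transfer-stack
# Remove the S3 bucket whose name was saved by deploy.sh
aws s3 rb s3://$(cat bucket-name.txt) --force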
A blue-green deployment model is a software delivery release strategy based on maintaining two separate application environments. The existing production environment running the current release of the software is called the blue environment, whereas the new version of the software is deployed to the green environment. As part of testing and validating the new version, application traffic is gradually re-routed to the green environment. If no issues are found, the green environment becomes the new blue environment; the former blue environment can be taken down, and a new green environment can be established for the next release.

Why Is Blue-Green Deployment Useful?
The primary benefits of implementing a blue-green strategy are 1) minimal or zero application downtime, and 2) no negative impact on end users when switching them to a new software release, or when rolling back a release in the event of unforeseen issues with the new release or deployment. The concepts and components required to implement blue-green deployments include, but are not limited to, load balancers, routing rules, and container orchestration platforms like Kubernetes.

How Blue-Green Deployment Works
Let's assume that version 1 is the current version of the application and that we want to move to the new update, version 1.1. Version 1 will be called the blue environment, and version 1.1 will be called the green environment.

The Process of Switching Traffic Between the Two Environments
Now that we have two instances of the application, named blue and green, we want users to access the new green (v1.1) instance rather than the older blue instance. For this, we normally use a load balancer rather than a DNS record exchange, because DNS propagation is not instantaneous. With load balancers and routers, there is no need to change DNS records: the load balancer keeps referencing the same DNS record but routes new traffic to the green environment. This gives administrators full control of user access, which is important because it enables quickly switching users back to version 1 (the blue instance) in the event of a failure in the green instance. Because of the speed of the switchover, most users won't notice that they are now accessing a newer version of the service or application, or that they have been rolled back to a previous version.

Monitoring
Traffic can be switched from the blue to the green environment gradually or all at once. As the traffic flows to the green instance, the DevOps engineers get a small window of time to run smoke tests on the green instance. This is crucial, as they need to ensure that all aspects of the new version are running as they should before users are impacted on a wide scale.
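On Kubernetes, for example, such a switch is often implemented by re-pointing a Service's label selector. The commands below are a minimal sketch, not a production recipe; they assume two Deployments labeled version: blue and version: green sitting behind a Service named my-app (all names are illustrative):

Shell
# Route traffic to the green environment by re-pointing the Service selector
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'
# If smoke tests fail, switch back to blue instantly
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'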
The Benefits of Implementing Blue-Green Deployments
Improved user experience: As noted above, users don't experience any downtime, and the new environment can be rolled back instantly to the previous known-good state if necessary.
Disaster recovery: The blue-green strategy is also a best practice for simulating and running disaster recovery scenarios, because of the inherent equivalence of the blue and green instances and the ability to instantly fail over to the (backup) green instance in case of an issue with the (production) blue instance.
Simulating actual production scenarios: With a canary deployment, the testing environment is often not identical to the final production environment; instead, a small portion of the production environment receives a small amount of traffic to the new system. (Read more about Canary Analysis here.) By contrast, in a blue-green deployment, the new green instance can simulate the entire production environment running in the blue instance.
Increasing developer productivity: Gone are the days when DevOps engineers had to wait for low-traffic windows to deploy updates. The blue-green strategy eliminates the need to maintain downtime schedules, and developers can move their updates into production as soon as their code is ready.

Best Practices and Challenges To Keep in Mind When Implementing a Blue-Green Deployment

Choose Load Balancing Over DNS Switching
Do not use multiple domains to switch between servers; this is a very old way of diverting traffic. DNS propagation takes from hours to days, and it can take browsers a long time to pick up the new IP address, so some of your users may still be served by the old environment. Instead, use load balancing. Load balancers let you bring your new servers into service immediately, without depending on DNS, so you can ensure that all traffic is served by the new production environment.

Keep Databases in Sync
One of the biggest challenges of blue-green deployments is keeping databases in sync. Depending on your design, you may be able to feed transactions to both instances so that the blue instance serves as a backup once the green instance goes live. Alternatively, you may be able to put the application in read-only mode before cut-over, run it in read-only mode for a while, and then switch it to read-write mode; that may be enough to flush out many outstanding issues. Backward compatibility is business critical: any users added in the new version must still have access in the event of a rollback, otherwise the business could, for instance, lose new customers. Likewise, any data added in the new version must also be propagated to the old database in the event of a rollback.

Execute a Rolling Update
Container architectures have enabled rolling, or seamless, blue-green updates. Containers allow DevOps engineers to perform a blue-green update only on the pods that require it, and this decentralized architecture ensures that other parts of the application are not affected.
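As a concrete illustration of updating only the affected workload, the commands below are a minimal sketch assuming a Deployment named my-app with a container named my-app-container (names and image tag are illustrative):

Shell
# Update only the image of the affected Deployment; pods are replaced gradually
kubectl set image deployment/my-app my-app-container=my-app-image:v1.1
# Follow the rollout, and undo it if something looks wrong
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app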
Challenges To Consider While Implementing Blue-Green Deployments

Errors When Changing User Routing
Blue-green is the best choice of deployment strategy in many cases, but it comes with some challenges. One issue is that during the initial switch to the new (green) environment, some sessions may fail, or users may be forced to log back into the application. Similarly, when rolling back to the blue environment after an error, users logged in to the green instance may face service issues. With more advanced load balancers, these issues can be mitigated by slowing the movement of traffic from one instance to the other: the load balancer can be programmed either to wait a fixed duration for sessions to become inactive, or to force-close sessions for users still connected to the blue instance after a specified time limit. This may slow down the deployment process and result in some failed or stuck transactions for a small fraction of users, but it provides a far more seamless and uninterrupted service quality than having routers force all users out and divert their traffic.

(Figures: seamless blue-green deployment; instantaneous blue-green deployment)

High Infrastructure Costs
The elephant in the room with blue-green deployments is infrastructure cost: organizations that adopt a blue-green strategy need to maintain infrastructure twice the size required by their application. If you utilize elastic infrastructure, the cost can be absorbed more easily. Similarly, blue-green deployments can be a good choice for applications that are less hardware intensive.

Code Compatibility
Lastly, the blue and green instances both live in the production environment, so developers need to ensure that each new update is compatible with the previous environment. For example, if a software update requires changes to a database (e.g., adding a new field or column), the blue-green strategy is difficult to implement, because traffic is at times switched back and forth between the blue and green instances. Make it a mandate to use a database that remains compatible across all software updates, as some NoSQL databases are.

Conclusion
The blue-green software deployment strategy can involve significant costs, but it is one of the most widely used advanced deployment strategies. Blue-green is particularly helpful when you expect environments to remain consistent between releases and you require reliable user sessions across new releases.
Cloud computing has revolutionized the software industry in the last 10 years. Today, most organizations prefer to host applications and services in the cloud due to ease of deployment, high security, scalability, and lower maintenance costs compared with on-premise infrastructure. In 2006, Amazon launched its cloud services platform, Amazon Web Services (AWS), one of the leading cloud providers to date. Currently, AWS offers over 200 cloud services, including cloud hosting, storage, machine learning, and container management. AWS Elastic Container Service (ECS) and AWS Lambda are both Amazon code deployment solutions, each with its own benefits and use cases. In this article, we will compare AWS ECS vs. AWS Lambda, how each fulfills its function, and which is the better choice for your business requirements.

What Is AWS ECS?
AWS ECS is a container management solution that manages and deploys Docker containers. It treats each container as a task and lets users run, stop, and manage them easily using the following components:
Task definition: Defines the configuration for a task. Users can apply a single definition to multiple tasks if required.
Task: An instance of a task definition. A task can run standalone or as part of a service. In simple words, a task is a running container.
Cluster: Multiple running tasks form a cluster. A cluster can have multiple task definitions applied within it.

Essentially, containerization is a deployment technology that uses containers to store an entire application within an image file, including the code, all relevant installations, and operating system (OS) requirements. These files are very lightweight and easily deployable, and they build the complete environment the application needs to operate. Developers use containers to avoid the hassle of dependency issues and to make deployments as smooth as possible; modern microservices-based applications use containers for deployment. However, managing many containers becomes challenging, which is why most companies opt for AWS ECS to streamline their container management needs. AWS ECS allows developers to deploy their containers on AWS Elastic Compute Cloud (EC2), where the user has to maintain the EC2 infrastructure.

AWS ECS and AWS Fargate
AWS ECS can also be deployed via AWS Fargate, a compute engine that automates the creation and management of the underlying infrastructure required to run containers. Fargate only requires users to upload the image to be deployed and select the CPU and memory requirements. This ease of deployment often makes AWS Fargate the more convenient way to run AWS ECS workloads; the sketch below shows what a minimal Fargate deployment looks like from the command line.
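This is a minimal sketch using the AWS CLI; the cluster name, task definition name and file, and subnet ID are illustrative assumptions, not values from any real project:

Shell
# Create a cluster, register a task definition, and run it on Fargate
aws ecs create-cluster --cluster-name demo-cluster
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs run-task \
    --cluster demo-cluster \
    --launch-type FARGATE \
    --task-definition demo-task \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],assignPublicIp=ENABLED}"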
A key point to note is that AWS ECS solves the deployment problem for large-scale applications, but that may not always be what you are looking to manage. What if you need to deploy a small piece of code, or a function that must be executed on specific triggers? In this scenario, you can turn to AWS Lambda.

What Is AWS Lambda?
AWS Lambda is a computing service that allows users to deploy small pieces of code in a serverless environment, where servers are managed entirely by the cloud provider behind the scenes. It natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby, and provides a Runtime API that allows users to bring any additional programming language. Functions defined in AWS Lambda run in an isolated environment, each with its own resources and file system view. These functions are bound to triggers and are executed when certain events occur. For example, you can create a pre-processing function for images and host it on AWS Lambda: whenever an image file is uploaded to an AWS S3 bucket, the function is triggered, and the image runs through the algorithm before being stored. Other AWS Lambda triggers include:
Inserts, updates, and deletes on a DynamoDB table
Modifications to objects in S3 buckets
Notifications sent from Amazon Simple Notification Service (SNS)

Another essential point to note is that AWS Lambda functions are executed in containers, which further helps with the isolation and security of the code. The appropriate runtime environment (Python, Node.js) is initialized within a container during execution. Once execution is complete, the container is paused and resumed only for a subsequent call. The AWS runtime deletes the container if no call is made during a specific period, after which a new container must be initialized.
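To make this concrete, here is a minimal sketch of creating and invoking a function with the AWS CLI; the function name, handler, role ARN, and file names are illustrative assumptions:

Shell
# Create the function from a zipped deployment package
# (the role ARN below is a placeholder and must allow Lambda execution)
aws lambda create-function \
    --function-name image-preprocessor \
    --runtime python3.9 \
    --handler handler.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/lambda-exec-role
# Invoke it synchronously with a test payload
aws lambda invoke \
    --function-name image-preprocessor \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key": "photo.jpg"}' \
    response.json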
AWS ECS vs. AWS Lambda: What Are the Differences?
Both services help clients deploy applications and code, but they have very different use cases. It is important to explore these differences to understand which of the two best suits your requirements. The table below outlines a side-by-side comparison of AWS ECS vs. AWS Lambda:

AWS ECS | AWS Lambda
A high-performing, scalable container management service. | A function-executing service that runs code in response to triggers, powered by a serverless environment.
Works only with containers: you point to a container registry holding your Docker image, and the rest is managed by the service. | Only requires you to write the code. Currently, AWS Lambda supports Python, Node.js, Java, Ruby, Go, C#, and PowerShell.
Used for running Docker containers and deploying entire enterprise-scale applications. | Used for small applications built with a few lines of code.
Tasks can run for a long time, and the task count can be scaled through integration with Amazon CloudWatch alarms. | Lambda function execution time is limited to 15 minutes.
Running EC2 clusters are charged by the hour, which makes them more costly; AWS Fargate costs start at about $0.04 per vCPU per hour. | Billing is based on the number of requests to the function, which is generally more cost-effective.

There is no clear winner, as each service benefits different domains. However, there are a few key takeaways from the comparison: AWS ECS is built to handle large applications and offers scalability, while AWS Lambda excels at rapid execution of code to perform important runtime tasks. Cost is another important factor, and AWS Lambda wins here, since you only pay for the processing power used while running Lambda functions. To make an informed decision, you first need to explore your business requirements.

AWS ECS vs. AWS Lambda: How To Choose
When choosing between these services, the following questions can help clarify the decision:
What is the size of my application? Large-scale applications would be unmanageable on AWS Lambda; therefore, AWS ECS is the better choice.
What is the run time of my application? AWS Lambda limits program execution to 15 minutes, so if the application must run longer, AWS ECS is the better choice.
What is my software development and deployment budget? Both services are a good fit in their respective scenarios, but AWS Lambda's cheaper pricing structure gives it an edge over AWS ECS.
What are my project configuration requirements? Easy as it is, AWS ECS still has more setup requirements than AWS Lambda, but it offers greater configuration flexibility. In contrast, AWS Lambda is the better choice if you want a program executed straight away with minimal configuration.

These questions should help you make a better decision when choosing between AWS ECS and AWS Lambda as a deployment service.

Conclusion
If you use Amazon Web Services, it is likely that you already use at least one of these core AWS features. The benefits they provide may sometimes seem to overlap, but each service has unique capabilities that suit some cases more than others. This perspective is reinforced by our detailed AWS ECS vs. AWS Lambda comparison, where it is evident that the former is suitable for large-scale applications, while the latter works better when you need rapid execution of code to perform important runtime tasks.