Disclaimer: This content reflects my personal opinions, not those of any organizations I am or have been affiliated with. Code samples are provided for illustration purposes only; use them with caution and test thoroughly before deployment.
I recently worked on a project to migrate a CDK project to Terraform, because the client wanted to standardize on Terraform. This could have been a time-consuming job requiring expertise in both CDK and Terraform, but with help from generative AI, especially Amazon Q Developer and Amazon Bedrock, it became quite easy. This article will walk you through how I performed the migration and the lessons I learned along the way.
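As a quick taste of the approach, here is a minimal sketch that sends a single CDK construct to a Bedrock model and asks for a Terraform equivalent. It is illustration only: the model ID, region, and CDK snippet are placeholders, not the exact setup from my project.

```python
import boto3

# Minimal sketch: ask a Bedrock model to propose a Terraform equivalent
# for one CDK construct. Model ID, region, and snippet are placeholders.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

cdk_snippet = """
new s3.Bucket(this, 'DataBucket', {
  versioned: true,
  encryption: s3.BucketEncryption.S3_MANAGED,
});
"""

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any text model you have access to
    messages=[{
        "role": "user",
        "content": [{"text": "Convert this AWS CDK (TypeScript) construct to an "
                             "equivalent Terraform resource block:\n" + cdk_snippet}],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```

Whatever the model proposes still needs to be reviewed, planned, and tested before the resources are imported into Terraform state.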
My Linux laptop is still running the old Ubuntu 20.04, which goes out of support next year. I was planning to switch over to NixOS, but I don’t have time right now to do a fresh reinstall and learn NixOS from scratch. That’s why I decided to simply upgrade to Ubuntu 24.04 and switch to Wayland.
I was using i3 on X11, so switching to Wayland means I have to change many of my settings and switch to utilities that support Wayland. This post is a rundown of all the changes I’ve made to switch to Wayland. Overall, I enjoy the smoothness of Wayland (albeit barely noticeable), and being able to use newer, more polished utility tools.
(To view a larger version of a screenshot, right-click on the image and select Open Image in New Tab.)
By default, when you use JupyterLab in Amazon SageMaker Studio, you’ll see some Python code highlighted with pycodestyle syntax check errors. This can get distracting if you don’t care about them or already have the checks in your CI/CD pipeline.
Recently, I’ve been working on a project that requires running thousands of models simultaneously. To save costs, we decided to run them on a SageMaker Multi-Model Endpoint.
Here is the definition of Multi-Model Endpoints from the official AWS documentation:
Multi-model endpoints provide a scalable and cost-effective solution to deploying large numbers of models. They use the same fleet of resources and a shared serving container to host all of your models. This reduces hosting costs by improving endpoint utilization compared with using single-model endpoints. It also reduces deployment overhead because Amazon SageMaker manages loading models in memory and scaling them based on the traffic patterns to your endpoint.
Some example use cases include:
House price estimation models for different cities
Machine anomaly detection algorithms for different machine configurations
These use cases involve many models that share the same algorithm and framework but are trained on different datasets.
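To make the mechanism concrete, here is a rough sketch of invoking one specific model on a multi-model endpoint: each request names the model artifact it wants via the TargetModel parameter, and SageMaker loads that model into the shared container on demand. The endpoint name, artifact name, and payload below are placeholders.

```python
import boto3

# Rough sketch: call one specific model hosted on a multi-model endpoint.
# Endpoint name, target model artifact, and payload are placeholders.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-multi-model-endpoint",
    TargetModel="house-price-city-042.tar.gz",  # artifact under the endpoint's S3 model prefix
    ContentType="text/csv",
    Body="1200,3,2,1995",  # example feature vector
)

print(response["Body"].read().decode("utf-8"))
```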
A key question arises: “How many models can we fit into one instance, and what instance type do we need?” This post presents my experiment results to answer this question.
Triggering an AWS CodePipeline when new files are uploaded to S3 is a very common use case. For example, when new data is uploaded, you can trigger a CodePipeline that kicks off SageMaker model retraining or inference. However, the documentation and services involved in this process have gone through multiple updates, making it confusing to figure out the current recommended approach. In this post, I will untangle the different options and point out the most up-to-date, recommended one.
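As a preview of the kind of wiring involved, here is a rough boto3 sketch of one of the options: an EventBridge rule that starts a pipeline execution whenever an object is created in a given bucket. The bucket name, pipeline ARN, and role ARN are placeholders, and the bucket needs EventBridge notifications enabled for the events to flow.

```python
import json
import boto3

# Rough sketch: an EventBridge rule that starts a CodePipeline execution
# when a new object is created in a specific S3 bucket.
# Bucket name, pipeline ARN, and IAM role ARN are placeholders.
events = boto3.client("events")

events.put_rule(
    Name="start-pipeline-on-s3-upload",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["my-data-bucket"]}},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="start-pipeline-on-s3-upload",
    Targets=[{
        "Id": "codepipeline-target",
        "Arn": "arn:aws:codepipeline:us-east-1:123456789012:my-pipeline",
        # role that allows codepipeline:StartPipelineExecution on the target pipeline
        "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeInvokeCodePipeline",
    }],
)
```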