Shing Lyu

Disclaimer: This content reflects my personal opinions, not those of any organizations I am or have been affiliated with. Code samples are provided for illustration purposes only; use with caution and test thoroughly before deployment.

Turning Hand-Drawn Architecture Diagrams into Digital Diagrams with Generative AI

Background

As an architect, whiteboarding is an essential part of the design and problem-solving process. However, once the whiteboard session is over, it can be challenging to translate those hand-drawn diagrams into a digital format that can be easily shared and collaborated on. Traditionally, this process involved manually recreating the diagrams using diagramming tools, which can be time-consuming and prone to errors.

My Approach with Generative AI

To streamline this process, I’ve adopted a new approach that leverages the power of generative AI. By combining a multimodal large language model (LLM) from Amazon Bedrock with the versatile diagramming tool draw.io, I can effortlessly convert hand-drawn architecture diagrams into digital diagrams.
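As a rough illustration of the idea, here is a minimal Python sketch that sends a photo of a whiteboard to a multimodal model on Amazon Bedrock and asks for draw.io XML back. The model ID, file name, and prompt are my own placeholders, not necessarily what the full post uses:

```python
import boto3

# Placeholder model ID: any multimodal model available in your Bedrock region works
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

bedrock = boto3.client("bedrock-runtime")

# Read the whiteboard photo as raw bytes
with open("whiteboard.jpg", "rb") as f:
    image_bytes = f.read()

# Send the image plus an instruction using the Bedrock Converse API
response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
            {"text": "Convert this hand-drawn architecture diagram into "
                     "draw.io XML. Return only the XML."},
        ],
    }],
)

# The returned XML can be pasted into draw.io via Extras > Edit Diagram...
print(response["output"]["message"]["content"][0]["text"])
```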

(continue reading...)


Streamline my local transcription command for Raycast

In my previous blog, I asked Raycast to create an iTerm2 terminal to run the transcription script. This was because I needed to press CTRL+C to stop the sox recording. However, I found that launching iTerm2 and zsh took a significant amount of time, sometimes 5 to 10 seconds, which was too slow for quickly jotting down thoughts.

To address this issue, I discovered a way to stop the sox recording without opening the iTerm terminal. Instead of using CTRL+C, I can send a kill signal to the sox process. To achieve this, I utilize some shell script tricks to save the PID (Process ID) of sox and display a macOS pop-up. When I press the pop-up button, it sends a kill signal to the sox process, effectively stopping the recording.
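Here is a minimal Python sketch of the same idea (the post itself uses shell script tricks; the file name and dialog text here are my own placeholders):

```python
import signal
import subprocess

# Start a sox recording from the default input device and keep its handle (and PID)
rec = subprocess.Popen(["sox", "-d", "recording.wav"])

# Show a macOS pop-up; this call blocks until the button is pressed
subprocess.run([
    "osascript", "-e",
    'display dialog "Recording..." buttons {"Stop"} default button "Stop"',
])

# Pressing the button sends sox the same signal as CTRL+C, finalizing the file
rec.send_signal(signal.SIGINT)
rec.wait()
```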

(continue reading...)


Transcribe voice to text locally with Whisper.cpp and Raycast

Why local transcription?

In today’s world, many of us rely on cloud-based services for tasks like speech-to-text transcription. While these services are convenient, they come with drawbacks like high latency due to network transfers and potential privacy concerns as our audio data is sent to the cloud.

Local transcription addresses these issues by performing the speech recognition process entirely on your device, ensuring low latency and keeping your audio data private. This is where whisper.cpp comes in.
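As a taste of what this looks like, here is a small Python sketch that shells out to the whisper.cpp CLI; the binary and model paths are assumptions that depend on where you built whisper.cpp and which model you downloaded:

```python
import subprocess

# Assumed paths: adjust to your whisper.cpp build and downloaded model
WHISPER_BIN = "./whisper.cpp/main"
MODEL = "./whisper.cpp/models/ggml-base.en.bin"

# whisper.cpp expects 16 kHz mono WAV input
result = subprocess.run(
    [WHISPER_BIN, "-m", MODEL, "-f", "recording.wav", "--no-timestamps"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```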

(continue reading...)


Understanding SageMaker Project Template Internals

Authors: Brajendra Singh, Shing Lyu

This blog post is derived from a workshop I built with Brajendra Singh that was never published; I’m extracting the content into a blog post. You will learn how to deploy the SageMaker-provided MLOps template for model deployment and how the template works internally. If a screenshot is too small, right-click on the image and select “Open Image in New Tab”.

MLOps is one of the hottest topics in the field right now. Organizations are looking for ways to productionize their ML models, and MLOps is the key to repeatable results. Amazon SageMaker Projects is a feature that allows you to create a full MLOps pipeline in just a few clicks. You are going to create the MLOps pipeline using the SageMaker-provided MLOps template. This template creates the deployment pipeline, along with a trigger that monitors the SageMaker model registry for newly approved models and uses approval as the signal to deploy them.
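To make the trigger concrete: approving a model package version in the registry is what kicks off the deployment. A minimal boto3 sketch of that approval step (the ARN is a placeholder) might look like this:

```python
import boto3

sm = boto3.client("sagemaker")

# Placeholder ARN: in practice you would pick a version from your model package group
package_arn = "arn:aws:sagemaker:eu-west-1:111122223333:model-package/my-group/1"

# Flipping the approval status is the signal the template's trigger watches for
sm.update_model_package(
    ModelPackageArn=package_arn,
    ModelApprovalStatus="Approved",
)
```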

Why do you need an MLOps pipeline?

The MLOps pipeline you are going to deploy will help you build a robust foundation for your machine learning experiments. It automates the model deployment and testing process, so there is less room for human error. Once a model is approved in the model registry, it is deployed automatically to the staging endpoint, and an automated test is run against that endpoint. This helps you catch problems with the model early and prevents you from deploying a faulty model to production. You remain in control of what is deployed to production, thanks to the manual approval step in the pipeline. All the pipeline configuration, CloudFormation templates, and test scripts are managed as code in a CodeCommit repository, so you have repeatable deployments of the pipeline itself. By managing the pipeline as code, you also have better visibility into when and how the pipeline has changed, and you can easily roll back any bad configuration. All these benefits give your data scientists more confidence to experiment fast and fail fast, because they know they can easily roll back any failed experiment.

(continue reading...)


Using LLM to get cleaner voice transcriptions

Transcribing natural speech is a challenging task for voice-to-text services. When using traditional transcription tools, speakers are expected to talk slowly and clearly, avoiding fillers like “um” and “uh” as much as possible. However, this puts a huge cognitive load on the speaker to monitor every word before it’s spoken. Suddenly, having a normal conversation becomes an artificial performance.

Recent advances in natural language processing have enabled a better approach - transcribing first, then cleaning up the transcript afterward using a language model. The key insight is that language models can leverage contextual information and an understanding of language semantics to automatically fix many common mistakes and disfluencies in a transcript.

For example, suppose I’m giving an informal presentation and say:

“The, uh, model, uh no, algorithm, um, is able to…correct itself, uh, when it makes erroneous, uh, predictions…”

A language model could process this rough transcript and output:

“The algorithm is able to correct itself when it makes erroneous predictions.”
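A minimal Python sketch of this cleanup step, using a text model on Amazon Bedrock (the model ID and prompt wording are my own assumptions, not necessarily what the full post uses):

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

raw = ("The, uh, model, uh no, algorithm, um, is able to...correct itself, "
       "uh, when it makes erroneous, uh, predictions...")

# Ask the model to remove fillers and false starts while preserving meaning
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": "Clean up this rough transcript. Remove filler words and "
                    "false starts, but keep the original meaning:\n\n" + raw
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```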

(continue reading...)