Hi, my name’s Andrew Chang and I like to build things.
My engineering experience spans AI/ML, deep learning, natural language processing, and MLOps.
I enjoy building full-stack applications on distributed systems with tools like Python, Docker, and Kubernetes.
I am the creator and maintainer of packages and repositories including AdaptNLP, Godel, GoldNLP, ML Docker images, and other tools/services.
Interests: Open Source, MLOps, Natural Language Processing, Accelerated Computing
Projects
- American Data Science [blog]
  Simple Jupyter Lab and Hub management platform with pre-configured environment runtimes supporting a variety of tools, libraries, and GPUs.
- Fast Neural Networks (FastNN) [code]
  A framework for deploying serializable and optimizable neural net models at scale in production via the NVIDIA Triton Inference Server (see the FastNN sketch after this list).
- AdaptNLP [code] [blog]
  A high-level framework and library for running, training, and deploying state-of-the-art Natural Language Processing (NLP) models for end-to-end tasks. Built atop Zalando Research’s Flair and Hugging Face’s Transformers libraries, AdaptNLP gives machine learning researchers and scientists a modular, adaptive approach to a variety of NLP tasks through an Easy API for training, inference, and deploying NLP-based microservices (see the AdaptNLP sketch after this list).
- Automated Learning Rate Suggester
  A learning rate finder for the 1cycle learning policy. It automates the selection of a recommended learning rate in FastAI at each step of training using an interval slide rule technique (see the learning rate sketch after this list).
- ML Docker Images
  Docker images for ML/AI applications, with containers ranging from CUDA-compatible Jupyter servers and environments to Kops/k8s plugins for CI/CD workflows. Docker images for AdaptNLP are also available here.
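
FastNN sketch. A minimal, hypothetical example of the kind of workflow FastNN targets: serializing a PyTorch model with TorchScript so it can live in a Triton Inference Server model repository. The toy model, names, and paths are illustrative assumptions, not FastNN's own API.

```python
# Hypothetical sketch (not FastNN's own API): serialize a PyTorch model with
# TorchScript so it can be served by NVIDIA Triton's PyTorch (libtorch) backend.
from pathlib import Path

import torch
import torch.nn as nn

# A small stand-in model; any traceable torch.nn.Module works the same way.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
example_input = torch.randn(1, 128)

# Trace into a serializable, optimizable TorchScript artifact.
traced = torch.jit.trace(model, example_input)

# Triton's model repository layout is <repo>/<model_name>/<version>/model.pt,
# with a config.pbtxt beside the version directories describing inputs/outputs.
version_dir = Path("model_repository/toy_classifier/1")
version_dir.mkdir(parents=True, exist_ok=True)
traced.save(str(version_dir / "model.pt"))
```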
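
AdaptNLP sketch. A minimal inference example in the Easy API style described above. The Hugging Face model id and exact keyword arguments are assumptions and may differ across AdaptNLP versions.

```python
# Sketch of AdaptNLP-style inference with an Easy* class for sequence
# classification; the model id below is an illustrative assumption.
from adaptnlp import EasySequenceClassifier

classifier = EasySequenceClassifier()
results = classifier.tag_text(
    text="This framework made fine-tuning painless.",
    model_name_or_path="nlptown/bert-base-multilingual-uncased-sentiment",
    mini_batch_size=1,
)
print(results)
```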
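
Learning rate sketch. The suggester above automates what is usually a manual read of an LR range test. As a rough stand-in, the sketch below picks the learning rate where the smoothed loss falls fastest; this steepest-slope heuristic is only an illustration and is not the interval slide rule technique itself.

```python
# Illustrative heuristic (not the interval slide rule technique): given the
# (learning rate, loss) pairs from an LR range test, suggest the learning rate
# at which the smoothed loss is decreasing most steeply.
import numpy as np


def suggest_lr(lrs, losses, smooth=5):
    lrs = np.asarray(lrs, dtype=float)
    losses = np.asarray(losses, dtype=float)
    # Moving-average smoothing to reduce noise in the recorded losses.
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(losses, kernel, mode="valid")
    # Align each smoothed value with the learning rate at its window center.
    offset = smooth // 2
    centers = lrs[offset : offset + len(smoothed)]
    # Slope of loss with respect to log(lr); the most negative slope marks
    # the steepest descent of the curve.
    grads = np.gradient(smoothed, np.log(centers))
    return float(centers[int(np.argmin(grads))])


# Toy usage with a synthetic range-test curve: flat, then a steep drop, then divergence.
lrs = np.logspace(-6, 0, 200)
log_lrs = np.log10(lrs)
losses = 2.0 - 1.5 / (1.0 + np.exp(-4.0 * (log_lrs + 3.0))) + 0.1 * np.exp(log_lrs + 1.0)
print(f"Suggested learning rate: {suggest_lr(lrs, losses):.2e}")
```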
Conferences and Workshops
2020