Articles on AI

Last updated: 2022/12/15

Top deep-dives on AI

Pair Programming with AI: Writing a Distributed, Fault-Tolerant Redis Client using ChatGPT

Sailesh Mukil uses ChatGPT to write a Redis client, documenting all of the prompts and outputs along the way.
Some highlights:

  • ChatGPT made a working Redis client
  • ChatGPT has a good understanding of technical jargon
  • ChatGPT can translate code it has written into many different languages from a simple prompt (allegedly, since the article only demonstrates this partially at the end)

DeepETA: How Uber Predicts Arrival Times Using Deep Learning

Xinyu Hu, Olcay Cirit, Tanmay Binaykiya, and Ramit Hora present how Uber developed a "low-latency deep neural network architecture for global ETA prediction", focusing on "learnings and design choices".

Steven Pinker and I debate AI scaling!

Steven Pinker and Scott Aaronson discuss the future of the GPT-n language models.

How should we evaluate progress in AI?

A little old, but more relevant than ever given the recent explosion of the AI-as-a-service industry. David Chapman discusses how there is no single agreed-upon set of criteria for what counts as progress in the field of artificial intelligence.
Some highlights:

  • AI has always borrowed criteria, approaches, and specific methods from at least six fields: science, engineering, mathematics, philosophy, design, and spectacle
  • Because the criteria are incommensurable, they suggest divergent directions for research and produce sharp disagreements about what methods to apply
  • "Meta-rationality means figuring out how to use technical rationality in specific situations"

Implicit Bayesian Inference in Large Language Models

Ferenc Huszár uses a paper by Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma as the basis for a discussion of how large language models, in effect, learn to learn from their context.

How Might Generative AI Change Programming?

Laurence Tratt elaborates on how generative AI can be used for programming.
Some highlights:

  • "Programming turns specifications into software"
  • "Programming is unforgiving of approximations", which creates extra work when models produce incorrect output
  • Generative models can be useful for certain techniques, such as fuzzing
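To make the fuzzing point concrete, here is a minimal sketch of a fuzzing harness in Python. The function under test (`parse_csv_line`) and its round-trip invariant are illustrative assumptions, not from the article; the idea is that a generative model could supply either the harness itself or more interesting inputs than plain random strings.

```python
import random
import string

def parse_csv_line(line):
    # Toy function under test: splits a line into comma-separated fields.
    return line.split(",")

def fuzz(n_cases=1000, seed=42):
    """Feed random strings to the function under test and check an invariant.

    The invariant here (joining the fields reproduces the input) is chosen
    for illustration; real harnesses check properties of the real parser.
    """
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + ',"\\ '
    for _ in range(n_cases):
        line = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 40)))
        fields = parse_csv_line(line)  # must not raise on any input
        assert ",".join(fields) == line
    return n_cases

fuzz()
```

Because programming is unforgiving of approximations, the hard part is the invariant: a model-generated harness still needs a human-verified property to assert, or it only checks that the code does whatever it does.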

Accelerating Queries over Unstructured Data with ML, Part 5 (Semantic Indexes for Machine Learning-based Queries over Unstructured Data)

Daniel Kang, John Guibas, Peter Bailis, Tatsunori Hashimoto, and Matei Zaharia "propose TASTI, a method for constructing indexes for unstructured data" for machine learning.

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting [pdf]

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang present a novel transformer implementation for improving forecasting on long time-series sequences.

Want to see more in-depth content?

Subscribe to my newsletter!

Other Articles