

A Time-Series Analysis of my Girlfriend's Mood Swings
Next — Today I Learnt About Data Science | Issue #78
Hi there!
Today, we're going to embark on a fascinating journey that takes us from the humorous application of time-series analysis to predict mood swings, to the serious business of estimating long-term effects from short-run experiments. We'll explore the world of deep RL with OpenAI's Spinning Up guide and a curated list of AI readings from Andreessen Horowitz.
We'll also learn how to build a GitHub support bot using GPT-3, LangChain, and Python. As we navigate these stories, we'll introduce you to some intriguing packages, demystify a few jargon terms, share a couple of insightful tweets, and, as always, end with a chuckle-worthy meme.
So, without further ado, let's dive in!
Five Stories
A Time-Series Analysis of my Girlfriend’s Mood Swings
Mood swings… You’ve likely had them at some point. Even if you’re the Buddha of the 21st century, you certainly know what I’m talking about.
Ever wondered how to predict your girlfriend's mood swings? Well, one brave soul has ventured into the uncharted territory of applying time-series analysis to this conundrum. Buckle up for a hilarious journey where active listening, date nights, and pillow talk meet PlayStation 5 games, exponential smoothing, and machine learning models.
It's not just about surviving the storm, it's about knowing when it's safe for him to play with the boys or even buy a boat! ROFL 🤣
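The piece leans on exponential smoothing, which is easy to try on any daily series. Here's a minimal sketch with statsmodels, assuming a purely hypothetical daily "mood score" (the numbers and date range are made up):

```python
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Hypothetical daily mood score (1 = stormy, 10 = sunny); purely illustrative.
mood = pd.Series(
    [6, 7, 3, 4, 8, 2, 5, 6, 7, 3, 4, 8],
    index=pd.date_range("2023-01-01", periods=12, freq="D"),
)

# Simple exponential smoothing: each forecast is a weighted average of the
# latest observation and the previous forecast, weighted by smoothing_level.
fit = SimpleExpSmoothing(mood).fit(smoothing_level=0.4, optimized=False)
print(fit.forecast(3))  # flat forecast for the next three days
```

A higher smoothing_level reacts faster to the latest swing; a lower one assumes today's storm will pass.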
Estimating long-term effects when only short-run experiments are available
The article on Spotify Research discusses a method for estimating long-term effects when only short-run experiments are available. This is a significant challenge in fields like medicine and economics, where long-term outcomes are often different from short-term ones. The authors propose a solution that combines short-term experimental data and long-term observational data, even in the presence of latent confounders.
Their solution is based on instrumental variables, and they demonstrate the effectiveness of their method on both synthetic and real data. The method can be applied to any single-stage causal effect.
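To see why instruments help here, below is a minimal instrumental-variables (Wald estimator) sketch with numpy. The data-generating process is synthetic and the variable names are my own illustration, not Spotify's setup: z is the instrument, x the treatment, and u a latent confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                        # instrument (e.g., randomized exposure)
u = rng.normal(size=n)                        # latent confounder
x = 0.8 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # outcome; true effect of x is 2.0

# Naive OLS slope is biased because u drives both x and y.
ols = np.cov(x, y)[0, 1] / np.var(x)

# IV (Wald) estimator: cov(z, y) / cov(z, x). Valid because z is independent
# of u and affects y only through x.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS (biased): {ols:.2f}   IV: {iv:.2f}")  # IV lands near 2.0
```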
OpenAI: Spinning Up!
Deep reinforcement learning (RL) has significantly accelerated progress in machine learning, and this is OpenAI’s guide to the topic. To start working in deep RL, you need a background in coding, math, and regular deep learning, plus a high-level view of the field. Since approachable deep RL textbooks are hard to come by, this guide aims to close that gap.
And learning to implement deep RL algorithms is typically painful, because either
the paper that publishes an algorithm omits or inadvertently obscures key design details,
or publicly available implementations of an algorithm are hard to read, hiding how the code lines up with the algorithm.
While fantastic repos like garage, Baselines, and rllib make it easier for researchers who are already in the field to make progress, they build algorithms into frameworks in ways that involve many non-obvious choices and trade-offs, which makes them hard to learn from.
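For a sense of what the simplest end of that spectrum looks like, here is a minimal REINFORCE (vanilla policy gradient) sketch in PyTorch on CartPole. It assumes the classic gym API and one episode per update; it's a toy for intuition, not Spinning Up's implementation.

```python
import gym
import torch
import torch.nn as nn

# Assumes the classic gym API: reset() -> obs, step() -> (obs, reward, done, info).
env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done, _ = env.step(action.item())
        rewards.append(reward)

    # REINFORCE: weight each action's log-prob by the discounted return after it.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Everything beyond this loop (advantage estimation, batching, clipping) is exactly the kind of non-obvious design detail the guide walks through.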
AI Canon
AI research has expanded like crazy in the last few months. The folks at Andreessen Horowitz have curated a list of readings on AI topics ranging from its history and ethics to its future applications. It contains books, articles, papers, videos, and more.
Some of my picks:
State of GPT: From Karpathy, this is a very approachable explanation of how ChatGPT / GPT models in general work, how to use them, and what directions R&D may take.
AI for full-self driving at Tesla: Another classic Karpathy talk, this time covering the Tesla data collection engine. Starting at 8:35 is one of the great all-time AI rants, explaining why long-tailed problems (in this case stop sign detection) are so hard.
The annotated transformer: In-depth post if you want to understand transformers at a source code level. Requires some knowledge of PyTorch.
The Waluigi Effect: Nominally an explanation of the “Waluigi effect” (i.e., why “alter egos” emerge in LLM behavior), but interesting mostly for its deep dive on the theory of LLM prompting.
Building LLM applications for production: Chip Huyen discusses many of the key challenges in building LLM apps, how to address them, and what types of use cases make the most sense.
Build a GitHub Support Bot with GPT3, LangChain, and Python
The blog post on Dagster's website discusses how to build a GitHub support bot using GPT-3 (not 3.5/4), LangChain, and Python. The Dagster team faced the challenge of providing efficient support to their growing community, so they decided to leverage GPT-3 to create a Slack bot that could answer basic technical questions about Dagster.
It covers various aspects of this process, including the decision not to fine-tune GPT-3, constructing the prompt with LangChain, dealing with large documents and limited prompt window size, and applying these techniques to a GitHub repo. The post also discusses caching the embeddings with Dagster to save time and money, building the pipeline with Dagster, and future work.
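The core retrieval pattern behind such bots is easy to sketch. Below is a minimal version using LangChain's 2023-era imports (OpenAIEmbeddings, FAISS, RetrievalQA); the document snippets and question are placeholders, and this shows the general technique rather than Dagster's exact pipeline.

```python
# Requires: pip install langchain openai faiss-cpu, plus OPENAI_API_KEY set.
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Placeholder snippets; the real pipeline would chunk docs from a GitHub repo.
docs = [
    "Dagster assets are declared with the @asset decorator.",
    "Run `dagster dev` to launch the local development UI.",
]

# Embed the chunks once and index them, so each question costs only a cheap
# similarity search plus a single completion call.
index = FAISS.from_texts(docs, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),       # a GPT-3 completion model
    retriever=index.as_retriever(),
)
print(qa.run("How do I start the Dagster UI?"))
```

Caching those embeddings, as the post does with Dagster, matters because re-embedding the whole corpus on every run is the slow, expensive part.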
Four Packages
LangChain is a framework for building applications powered by large language models. It chains together prompts, models, memory, and tools such as retrievers and agents. Github.
MusicLM is Google’s state-of-the-art model for music generation; the linked repo is a PyTorch implementation built on attention networks. It can generate realistic and diverse musical sequences from a given prompt or style. Github.
Detectron2 is a framework for building high-performance object detection and segmentation models using Pytorch. It supports a wide range of tasks and architectures, and provides easy-to-use APIs and tools. Github.
OpenAI Jukebox is a neural network that can generate music in various genres and styles, including lyrics and vocals, from scratch or conditioned on an artist or genre. Github. (Their explorer is amazing!)
Three Jargons
The Breidbart Index, invented by long-time hacker Seth Breidbart, measures the severity of spam and is used for programming cancelbots. It takes into account the fact that excessive multi-posting is worse than excessive cross-posting. It is computed as follows: for each copy of the article in a spam, take the square root of the number of newsgroups that copy was posted to; the Breidbart Index is the sum of these square roots. For example, one article posted to nine newsgroups and again to sixteen would have BI = sqrt(9) + sqrt(16) = 7. It is generally agreed that a spam is cancelable if its Breidbart Index exceeds 20.
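As a quick sanity check on the formula (using the example's own numbers):

```python
from math import sqrt

def breidbart_index(newsgroup_counts):
    """BI = sum over copies of sqrt(newsgroups each copy was posted to)."""
    return sum(sqrt(n) for n in newsgroup_counts)

print(breidbart_index([9, 16]))        # 7.0, matching the example above
print(breidbart_index([9, 16]) > 20)   # False: not yet cancelable
```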
Brooks's Law states that the expected advantage from splitting development work among N programmers is O(N) (that is, proportional to N), while the complexity and communication cost of coordinating and merging their work is O(N^2) (that is, proportional to the square of N). It is frequently summarised as "Adding manpower to a late software project makes it later".
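The O(N^2) term comes from pairwise communication channels, of which N programmers have N(N-1)/2; a quick illustration:

```python
def channels(n):
    # Every pair of programmers is a potential communication channel.
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, "programmers ->", channels(n), "channels")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780: doubling the team roughly
# quadruples the coordination overhead.
```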
Second system effect: When designing the successor to a relatively small, elegant, and successful system, one tends to become grandiose in one's ambitions and create an elephantine, feature-laden monstrosity.
Two Tweets
https://twitter.com/Mappletons/status/1250532315459194880
https://twitter.com/chrisalbon/status/1676309875025121280
One Meme
https://twitter.com/miniapeur/status/1676405579412021248
— Harsh