Latent Space
Deep technical AI engineering content. The go-to podcast for AI builders.
183 episodes curated
Episodes
Emergency Pod: OpenAI's new Functions API, 75% Price Drop, 4x Context Length (w/ Alex Volkov, Simon Willison, Riley Goodside, Joshua Lochner, Stefania Druga, Eric Elliott, Mayo Oshin et al)
Full Transcript and show notes: https://www.latent.space/p/function-agents?sd=pf Timestamps: [00:00:00] Intro [00:01:47] Recapping June 2023 Updates [00:06:24] Known Issues with Long Context [00:08:00] New Functions API [00:10:45] Riley Goodside [00:12:28] Simon Willison [00:14:30] Eric Elliott [00:16:05] Functions API and Agents [00:18:25] Functions API vs Google Vertex JSON [00:21:32] From English back to Code [00:26:14] Embedding Price Drop and Pinecone Perspective [00:30:39] Xenova and Huggingface Perspective [00:34:23] Function Selection [00:39:58] Designing Code Agents with Function API
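The episode centers on the then-new Functions API: you describe functions to the model with JSON Schema, and it may reply with a structured `function_call` instead of prose. A minimal sketch of that request shape (the weather function and its parameters are illustrative, not from the episode):

```python
import json

# Sketch of a June 2023 Chat Completions request using the Functions API.
# The function is described with JSON Schema; "function_call": "auto" lets
# the model decide whether to call it or answer in plain text.
get_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "What's the weather in SF?"}],
    "functions": [get_weather],
    "function_call": "auto",  # model chooses between text and a function call
}

print(json.dumps(payload, indent=2))
```

The model's reply would then carry a `function_call` with JSON arguments for your code to execute and feed back as a follow-up message.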
From RLHF to RLHB: The Case for Learning from Human Behavior - with Jeffrey Wang and Joe Reeve of Amplitude
Welcome to the almost 3k latent space explorers who joined us last month! We’re holding our first SF listener meetup with Practical AI next Monday; join us if you want to meet past guests and put faces to voices! All events are in /community . Who among you regularly clicks the ubiquitous 👍/👎 buttons in ChatGPT/Bard/etc? Anyone? I don’t see any hands up. OpenAI has told us how important reinforcement learning from human feedback (RLHF) is to creating the magic that is ChatGPT, but we know from our conversation with Databricks’ Mike Conover just how hard it is to get just 15,000 pieces of ex…
Building the AI × UX Scenius — with Linus Lee of Notion AI
Read: https://www.latent.space/p/ai-interfaces-and-notion Show Notes * Linus on Twitter * Linus’ personal blog * Notion * Notion AI * Notion Projects * AI UX Meetup Recap Timestamps * [00:03:30] Starting the AI / UX community * [00:10:01] Most knowledge work is not text generation * [00:16:21] Finding the right constraints and interface for AI * [00:19:06] Linus' journey to working at Notion * [00:23:29] The importance of notations and interfaces * [00:26:07] Setting interface defaults and standards * [00:32:36] The challenges of designing AI agents * [00:39:43] Notion deep dive: “Blocks”, AI, …
Debugging the Internet with AI agents – with Itamar Friedman of Codium AI and AutoGPT
We are hosting the AI World’s Fair in San Francisco on June 8th! You can RSVP here. Come meet fellow builders, see amazing AI tech showcases at different booths around the venue, all mixed with elements of traditional fairs: live music, drinks, games, and food! We are also at Amplitude’s AI x Product Hackathon and are hosting our first joint Latent Space + Practical AI Podcast Listener Meetup next month! We are honored by the rave reviews for our last episode with MosaicML! Reviews are also welcome on Apple Podcasts and Twitter/HN/LinkedIn/Mastodon etc! We recently spent a wonderful week with Itamar…
MPT-7B and The Beginning of Context=Infinity — with Jonathan Frankle and Abhinav Venigalla of MosaicML
We are excited to be the first podcast in the world to release an in-depth interview on the new SOTA in commercially licensed open source models - MosaicML MPT-7B! The Latent Space crew will be at the NYC Lux AI Summit next week, and we have two meetups in June. As usual, all events are on the Community page! We are also inviting beta testers for the upcoming AI for Engineers course. See you soon! One of GPT-3’s biggest limitations is context length - you can only send it up to 4,000 tokens (3k words, 6 pages) before it throws a hard error, requiring you to bring in LangChain and other retrieval t…
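The workaround the blurb alludes to can be sketched simply: when a document exceeds the context window, split it into chunks that each fit, so a retrieval layer can select only the relevant pieces per call. The 4-characters-per-token ratio below is a rough rule of thumb, not an exact tokenizer:

```python
# Naive fixed-size chunking to fit a document into a 4k-token context
# window. Real pipelines use a tokenizer and overlap between chunks;
# this is a minimal illustration of the idea.
CONTEXT_TOKENS = 4000
CHARS_PER_TOKEN = 4  # rough heuristic, not a real tokenizer

def chunk_text(text: str, max_tokens: int = CONTEXT_TOKENS) -> list[str]:
    max_chars = max_tokens * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "x" * 50_000  # far too large for a single 4k-token call
chunks = chunk_text(doc)
print(len(chunks))  # → 4
```

A retrieval system then embeds these chunks and sends only the best matches to the model, which is why longer native context (as with MPT-7B's variants) changes the design space.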
Guaranteed quality and structure in LLM outputs - with Shreya Rajpal of Guardrails AI
Tomorrow, 5/16, we’re hosting Latent Space Liftoff Day in San Francisco. We have some amazing demos from founders at 5:30pm, and we’ll have open co-working starting at 2pm. Spaces are limited, so please RSVP here! One of the biggest criticisms of large language models is their inability to tightly follow requirements without extensive prompt engineering. You might have seen examples of ChatGPT playing a game of chess and making many invalid moves, or adding new pieces to the board. Guardrails AI aims to solve these issues by adding a formalized structure around inference calls, which validates…
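The general pattern is a validation layer wrapped around each inference call: parse the model's raw output, check it against an expected structure, and reject (or retry) on failure. A minimal stdlib-only sketch of that idea — not Guardrails' actual API, and the chess-move schema is hypothetical:

```python
import json

# Expected shape of the model's structured output (illustrative).
EXPECTED = {"move": str, "confidence": float}

def validate_output(raw: str) -> dict:
    """Parse raw LLM output as JSON and check fields and types.

    Raises ValueError on malformed JSON, missing fields, or bad types,
    which a guardrail layer could turn into a re-prompt.
    """
    data = json.loads(raw)  # raises a ValueError subclass if not JSON
    for field, ftype in EXPECTED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"bad type for field: {field}")
    return data

ok = validate_output('{"move": "e2e4", "confidence": 0.9}')
print(ok["move"])  # → e2e4
```

On validation failure, a guardrail system typically re-asks the model with the error appended, rather than surfacing broken output to the application.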
The AI Founder Gene: Being Early, Building Fast, and Believing in Greatness — with Sharif Shameem of Lexica
Thanks to the over 42,000 latent space explorers who checked out our Replit episode! We are hosting/attending a couple more events in SF and NYC this month. See you if you're in town! Lexica.art was introduced to the world 24 hours after the release of Stable Diffusion as a search engine for prompts, gaining instant product-market fit as a world discovering generative AI found it needed to learn prompting by example. Lexica is now 8 months old, serving 5B image searches/day, and just shipped V3 of Lexica Aperture, their own text-to-image model! Sharif Shameem breaks his podcast hiatus with us…
No Moat: Closed AI gets its Open Source wakeup call — ft. Simon Willison
It’s now almost 6 months since Google declared Code Red, and the results — Jeff Dean’s recap of 2022 achievements followed by a January exodus of the top research talent that contributed to them, Bard’s rushed launch in February, a slick video showing Google Workspace AI features alongside confusing doubly linked blogposts about the PaLM API in March, and the merger of Google Brain and DeepMind in April — have not been inspiring. Google’s internal panic is now on full display with the surfacing of a well-written memo by software engineer Luke Sernau, written in early April, revealing internal distress not…
Training a SOTA Code LLM in 1 week and Quantifying the Vibes — with Reza Shabani of Replit
Latent Space is popping off! Welcome to the over 8,500 latent space explorers who have joined us. Join us this month at various events in SF and NYC, or start your own! This post spent 22 hours at the top of Hacker News. As announced during their Developer Day celebrating their $100m fundraise following their Google partnership, Replit is now open sourcing its own state-of-the-art code LLM: replit-code-v1-3b (model card, HF Space), which beats OpenAI’s Codex model on the industry-standard HumanEval benchmark when finetuned on Replit data (despite being 77% smaller) and, more importantly, pa…
Mapping the future of *truly* Open Models and Training Dolly for $30 — with Mike Conover of Databricks
The race is on for the first fully GPT3/4-equivalent, truly open source Foundation Model! LLaMA’s release proved that a great model could be released and run on consumer-grade hardware (see llama.cpp), but its research license prohibits businesses from running it and all its variants (Alpaca, Vicuna, Koala, etc.) for their own use at work. So there is great interest in and desire for *truly* open source LLMs that are feasible for commercial use (with far better customization, finetuning, and privacy than the closed source LLM APIs). The previous leading contenders were Eleuther’s GPT-J and Neo o…
AI-powered Search for the Enterprise — with Deedy Das of Glean
The most recent YCombinator W23 batch graduated 59 companies building with Generative AI for everything from sales and support to engineering and data. Many of these B2B startups will be seeking to establish an AI foothold in the enterprise. As they look to recent success, they will find Glean, started in 2019 by a group of ex-Googlers to finally solve AI-enabled enterprise search. In 2022 Sequoia led their Series C at a $1b valuation, and Glean has just refreshed its website, touting new logos across Databricks, Canva, Confluent, Duolingo, Samsara, and more in the Fortune 50, and announcing…
Segment Anything Model and the Hard Problems of Computer Vision — with Joseph Nelson of Roboflow
2023 is the year of Multimodal AI, and Latent Space is going multimodal too! * This podcast comes with a video demo at the 1hr mark and it’s a good excuse to launch our YouTube - please subscribe! * We are also holding two events in San Francisco — the first AI | UX meetup next week (already full; we’ll send a recap here on the newsletter) and Latent Space Liftoff Day on May 4th (signup here; but get in touch if you have a high-profile launch you’d like to make). * We also joined the Chroma/OpenAI ChatGPT Plugins Hackathon last week, where we won the Turing and Replit awards and met some of…
AI Fundamentals: Benchmarks 101
We’re trying a new format, inspired by Acquired.fm! No guests, no news, just a highly prepared, in-depth conversation on one topic that will level up your understanding. We aren’t experts; we are learning in public. Please let us know what we got wrong and what you think of this new format! When you ask someone to break down the basic ingredients of a Large Language Model, you’ll often hear a few things: You need lots of data. You need lots of compute. You need models with billions of parameters. Trust the Bitter Lesson, more more more, scale is all you need. Right? Nobody ever mentions the s…
Grounded Research: From Google Brain to MLOps to LLMOps — with Shreya Shankar of UC Berkeley
We are excited to feature our first academic on the pod! I first came across Shreya when her tweetstorm of MLOps principles went viral. Shreya’s holistic approach to production-grade machine learning has taken her from Stanford to Facebook and Google Brain, to being the first ML Engineer at Viaduct, and now to a PhD in Databases (trust us, it’s relevant) at UC Berkeley with the new EPIC Data Lab. If you know Berkeley’s history of turning cutting-edge research into gamechanging startups, you should be as excited as we are! Recorded in person at the beautiful StudioPod studios in San Francisco. Full t…
Emergency Pod: ChatGPT's App Store Moment (w/ OpenAI's Logan Kilpatrick, LindyAI's Florent Crivello and Nader Dabit)
This blogpost has been updated since original release to add more links and references. The ChatGPT Plugins announcement today could be viewed as the launch of ChatGPT’s “App Store”, a moment as significant as when Apple opened its App Store for the iPhone in 2008 or when Facebook let developers loose on its Open Graph in 2010. With a dozen lines of simple JSON and a mostly-English prompt to help ChatGPT understand what the plugin does, developers will be able to add extensions to ChatGPT to get information and trigger actions in the real world. OpenAI itself launched with some killer first pa…
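The "dozen lines of simple JSON" refers to the plugin manifest: a short description in English tells ChatGPT when to use the plugin, and an OpenAPI spec tells it how. A sketch of that manifest, with field names following the ai-plugin.json format as announced (the TODO service and its URL are hypothetical):

```python
import json

# Illustrative ChatGPT plugin manifest. "description_for_model" is the
# mostly-English prompt the model reads to decide when to invoke the
# plugin; "api.url" points at an OpenAPI spec describing the endpoints.
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO Manager",
    "name_for_model": "todo_manager",
    "description_for_human": "Manage your TODO list.",
    "description_for_model": (
        "Use this plugin to create, read, and delete items "
        "on the user's TODO list."
    ),
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

print(json.dumps(manifest, indent=2))
```

ChatGPT then composes calls to the plugin's API on its own, which is what makes the launch feel like an app-store moment rather than a new SDK.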