🥟 Chao-Down #175: DALL·E 3 comes to ChatGPT, Waymo launches robotaxi tours for Los Angeles residents, Amazon Alexa gets generative AI upgrades, GitHub expands Copilot Chat to individual users
Plus, Google and the Department of Defense built an AI-powered microscope to help doctors spot cancer.
Apparently, saying a few magic words in your prompt is enough to make large language models perform better at math.
Google DeepMind researchers have developed a new technique that dramatically improves the math ability of AI language models like ChatGPT. By using other AI models to generate more effective prompts - the written instructions that guide the model - and incorporating human-style encouragement into those prompts, they significantly enhanced performance on math tasks.
This Optimization by PROmpting (OPRO) method, described in the recent paper “Large Language Models as Optimizers”, allows language models to solve problems typically requiring traditional math-based optimizers.
Interestingly, in this latest study, DeepMind researchers found "Take a deep breath and work on this problem step by step" to be the most effective prompt when used with Google's PaLM 2 language model.
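For the curious, the core OPRO loop is simple enough to sketch: the optimizer LLM is repeatedly shown its past instructions with their scores and asked to write a better one. The snippet below is a toy, runnable illustration of that structure, not the paper's actual code; `propose_instruction` and `score_instruction` are hypothetical stand-ins for a real LLM call and a real benchmark (e.g., accuracy on a math test set).

```python
def score_instruction(instruction: str) -> float:
    """Toy stand-in for evaluating an instruction on a math benchmark.
    A real scorer would run the prompt over a test set and return accuracy."""
    # Reward longer, step-oriented phrasing purely for demonstration.
    bonus = 0.2 if "step by step" in instruction else 0.0
    return min(1.0, 0.1 * len(instruction.split()) / 5 + bonus)

def build_meta_prompt(history):
    """Show the optimizer LLM its past instructions and scores, lowest
    first, then ask for a better one -- the core OPRO idea."""
    lines = ["Here are previous instructions and their scores:"]
    for instr, s in sorted(history, key=lambda pair: pair[1]):
        lines.append(f'  "{instr}" -> {s:.2f}')
    lines.append("Write a new instruction that achieves a higher score.")
    return "\n".join(lines)

def propose_instruction(meta_prompt: str, step: int) -> str:
    """Stub for the optimizer LLM; returns canned candidates so the
    loop runs offline. A real version would send meta_prompt to a model."""
    candidates = [
        "Solve the problem.",
        "Let's think about this carefully.",
        "Take a deep breath and work on this problem step by step.",
    ]
    return candidates[step % len(candidates)]

def opro(num_steps: int = 3):
    """Run the optimize-score loop and return the best (instruction, score)."""
    seed = "Let's solve it."
    history = [(seed, score_instruction(seed))]
    for step in range(num_steps):
        meta = build_meta_prompt(history)
        candidate = propose_instruction(meta, step)
        history.append((candidate, score_instruction(candidate)))
    return max(history, key=lambda pair: pair[1])

best_instruction, best_score = opro()
print(best_instruction)
```

With the toy scorer above, the loop surfaces the "take a deep breath" instruction as the winner, mirroring (in spirit only) the paper's PaLM 2 result.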
The results showcase the power of leveraging natural language to guide large language models in complex problem-solving. What set of words have you found to be the most effective in your prompts?
-Alex, your resident Chaos Coordinator.
What happened in AI? 📰
Google, DoD built an AI-powered microscope to help doctors spot cancer (CNBC)
GitHub expands access to Copilot Chat to individual users (TechCrunch)
Waymo begins testing the waters for a robotaxi service in Los Angeles (The Verge)
Amazon’s all-new Alexa voice assistant is coming soon, powered by a new Alexa LLM (The Verge)
OpenAI’s new AI image generator pushes the limits in detail and prompt fidelity (Ars Technica)
Always be Learnin’ 📕 📖
Why Developers and Staff+ Engineers Should Get Involved in Open-Source Collaborative Development (infoq.com)
How Microsoft does Quality Assurance (QA) (The Pragmatic Engineer)
5 Interesting Learnings from Samsara At Almost $1 Billion in ARR (SaaStr)
Projects to Keep an Eye On 🛠
nsbradford/VimGPT: Experimental LLM agent/toolkit with direct Vim access using neovim/pynvim (GitHub)
OpenPipe/OpenPipe: Turn expensive prompts into cheap fine-tuned models (GitHub)
The Latest in AI Research 💡
Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models (arxiv)
Agents: An Open-source Framework for Autonomous Language Agents (arxiv)
Do PLMs Know and Understand Ontological Knowledge? (arxiv)
The World Outside of AI 🌎
‘There is no work to balance’: how shrinking budgets, Covid and AI shook up life in consulting (Financial Times)
Streaming Is Changing the Sound of Music (WSJ)
Remote work thrives in the biggest and fastest-growing parts of the U.S. (axios.com)
The cable bundle of the future is officially here (The Verge)
Hypertension Patients Double as Many Go Untreated, WHO Finds (Bloomberg)
One Last Bite 😋
For those interested in seeing demos of my team’s work, check out this deeper-dive walkthrough of Semantic Kernel and Promptflow to help you evaluate your LLM skills and plugins!