Questions

Last updated on December 13, 2020

Here's an ongoing list of questions I find interesting. As always, if you have any thoughts on these questions, I'd love to hear from you!

To what extent can transformers be scaled? Richard Sutton's The Bitter Lesson appears to be holding true, so how soon can we train a transformer 100x larger than GPT-3? Maybe multi-modal models (models trained on both text and image data) are the key to a more human-like understanding of the world? Given how performance improves with scale, transformers are shaping up to be one of the most important ML breakthroughs in decades, so what will a 100 trillion parameter multi-modal model be capable of, and what are the implications?
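
For a rough sense of the scale involved, here's a back-of-envelope sketch using the commonly cited approximation that training a transformer costs roughly 6 × parameters × tokens in FLOPs. The 100x and 100-trillion-parameter scenarios, and their token counts, are hypothetical assumptions, not predictions.

```python
# Rough back-of-envelope: how much compute would a 100x GPT-3 need?
# Uses the commonly cited ~6 * parameters * tokens approximation for
# total training FLOPs; the scaled-up scenarios are assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs as 6 * N * D."""
    return 6 * params * tokens

gpt3_params = 175e9   # GPT-3: ~175 billion parameters
gpt3_tokens = 300e9   # GPT-3: trained on ~300 billion tokens

scenarios = {
    "GPT-3": (gpt3_params, gpt3_tokens),
    "100x GPT-3, same data": (100 * gpt3_params, gpt3_tokens),
    "100T params, 10x data": (100e12, 10 * gpt3_tokens),
}

for name, (n, d) in scenarios.items():
    print(f"{name}: ~{training_flops(n, d):.2e} FLOPs")
```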

How soon will von Neumann probes (self-replicating spacecraft) be feasible? Given the vastness of space, they seem to be our best bet at exploring the universe. Mining asteroids and manufacturing new machinery from raw materials is extremely challenging. In addition, the uncertainty of space means the control unit must be able to make decisions in unexpected situations. Maybe a sufficiently capable transformer model could serve as each spacecraft's control unit?

In what ways will generative deep learning make previously hardware-constrained problems solvable? Some examples include Nvidia's RTX real-time ray tracing, Google Pixel's Night Sight mode, and using GANs to stream higher quality video over less bandwidth. How soon will generative models make every iPhone camera produce images as visually stunning as those from an $80,000 RED camera?
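
As a purely illustrative comparison of the bandwidth idea: a sender could transmit a compact representation of each frame (say, facial keypoints) and let a generative model reconstruct the full frame on the receiver's side. All of the sizes in this sketch are assumptions for the sake of the arithmetic, not measurements of any real system.

```python
# Illustrative arithmetic only: why a generative decoder can save bandwidth.
# Instead of sending full frames, send a compact representation and
# reconstruct the frame with a generative model at the receiver.

width, height, fps = 1920, 1080, 30
raw_frame_bytes = width * height * 3               # 24-bit RGB, uncompressed
keypoints, floats_per_point = 68, 2                # hypothetical facial keypoints
keypoint_bytes = keypoints * floats_per_point * 4  # 32-bit floats

raw_mbps = raw_frame_bytes * fps * 8 / 1e6
keypoint_kbps = keypoint_bytes * fps * 8 / 1e3

print(f"Raw 1080p video: ~{raw_mbps:.0f} Mbit/s")
print(f"Keypoints only:  ~{keypoint_kbps:.1f} kbit/s")
```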

What will gene editing mean for the average person? Companies like 23andMe and Promethease already let anyone access their genetic code. How soon will we be able to modify that code, and how can we speed that up? When will I be able to get custom drugs tailored to my genetic profile?

How can we get more students to read biographies of ambitious people? Many successful people today seem to have independently come to the same realization: the titans of industry we look up to aren't all that different from ourselves. What can we do to get more people to come to that realization? What would a world look like where everyone believes they're as capable as Einstein? What might a school curriculum optimized for self-efficacy look like?

How soon until we have the first popular virtual reality MMOSG (massively multiplayer online simulation game)? Will it be a single location like Ready Player One's OASIS or Snow Crash's Metaverse, or will it have multiple independent worlds like Minecraft servers today?

What is the best encoding for human thought? Is it written language? Will Neuralink actually be useful? Currently, the only ways to encode our thoughts are writing and speaking, and both can be incredibly lossy. Could we represent an idea as an n-dimensional vector? Or perhaps as a series of electrical signals we could somehow replay in our brains?
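
As a toy illustration of the vector idea: even something as crude as TF-IDF (here via scikit-learn) maps sentences into an n-dimensional space where related "thoughts" land closer together. A real encoding of thought would presumably use learned embeddings rather than word counts; the sentences below are just placeholders.

```python
# A very crude illustration of "an idea as an n-dimensional vector":
# embed a few sentences with TF-IDF and compare them by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

thoughts = [
    "Self-replicating spacecraft could explore the galaxy",
    "Self-replicating probes might explore other star systems",
    "Custom drugs could be tailored to a person's genetic profile",
]

vectors = TfidfVectorizer().fit_transform(thoughts)  # each row: one idea as a vector
similarity = cosine_similarity(vectors)

# The first two sentences share vocabulary, so their vectors are closest.
print(similarity.round(2))
```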

Why is most software so boring? We understand why games are fun to play, so why aren't game mechanics more widely used in software? We often play games for hours straight, in a state of flow, unaware of time passing us by. Wouldn't it be in the best interest of every product team to create value not just along the utility dimension, but along the fun dimension too? Perhaps effective game design is already challenging, even for experienced game designers, so asking an already constrained team to build fun into their product is asking too much.

How can we develop a non-sparse reward function for solving programming problems? A denser reward signal for writing code would let us train reinforcement learning agents to build valuable software for us.
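
Here's one naive sketch of what "less sparse" could mean: award partial credit for every passing unit test, and penalize crashes, instead of a single pass/fail reward for a fully correct program. The specific weights and the toy example are arbitrary assumptions, not a proposal for the right reward design.

```python
# A sketch of one way to densify the reward for "write correct code":
# partial credit per passing unit test plus a small crash penalty,
# rather than a 0/1 reward for a fully correct program.
from typing import Callable, List, Tuple

def dense_reward(program: Callable, tests: List[Tuple[tuple, object]]) -> float:
    """tests: list of (args, expected_output) pairs."""
    passed = 0
    crashed = 0
    for args, expected in tests:
        try:
            if program(*args) == expected:
                passed += 1
        except Exception:
            crashed += 1
    test_score = passed / len(tests)               # partial credit per test
    crash_penalty = 0.1 * (crashed / len(tests))   # discourage crashing programs
    return test_score - crash_penalty

def candidate(x):
    """An agent's imperfect attempt at absolute value."""
    return x if x > 0 else 0  # wrong for negative inputs

tests = [((3,), 3), ((-2,), 2), ((0,), 0)]
print(dense_reward(candidate, tests))  # ~0.67 instead of a flat 0
```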