OpenAI Chief Product Officer Kevin Weil recently sat down with Varun Mayya and Tanmay Bhat on their YouTube show Overpowered to discuss the rapid advancements in AI, particularly its impact on coding, future job landscapes, and broader AI predictions for 2025.
Below is a Q&A-style breakdown of the insightful conversation, covering coding automation timelines, the role of engineers in an AI-driven world, and billion-dollar startup ideas.
On Coding Automation Predictions
Q: Three years ago, the idea of coding being automated was unthinkable, but now AI models are getting very good at generating functional code for front-end and back-end development.
I recently spoke to someone from Anthropic who predicted that coding would be 99% automated by 2027. Do you have a timeline or thesis on when this might happen, given your work with OpenAI’s research teams and product development?
Kevin: At the rate we’re going, I would be surprised if it’s 2027—I think it’s going to be sooner.
Kevin: When we launched GPT-4o, it was a very good coding model, powering tools like GitHub Copilot, and people are using it at scale worldwide. But it didn't compare to the leap we saw with o1-preview, which was a much better coding model because reasoning matters when writing code.
Whether you’re solving a hard problem, doing a crossword puzzle, or writing code, you need the ability to break down problems, form hypotheses, and validate or refute them. That’s what reasoning enables.
For example, o1-preview ranked as roughly the millionth-best competitive programmer in the world. That doesn't sound impressive, but when you consider there are 30 to 40 million programmers globally, it's the top 2-3%.
The full o1 launch improved that to roughly the thousandth-best competitive programmer. Now, the upcoming o3 model, according to the same benchmarks, is the 175th-best competitive coder in the world.
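(For scale: a millionth-place ranking among the 30 to 40 million programmers Kevin cites works out to roughly 1,000,000 / 35,000,000 ≈ 2.9%, i.e. about the top 3% worldwide, while thousandth place is about the top 0.003%.)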
And as we train successor models, they’re already better. I think this is the year that, at least by competitive coding benchmarks, AI becomes better than humans at competitive coding forever—similar to how computers surpassed humans at multiplication 70 years ago or how AI beat humans at chess 15 years ago.
There’s no going back.
We, along with companies like Anthropic and Google, are putting a lot of focus into this, so it’s going to move really fast.
The world will be different because of it, and I think it’ll be a lot better. Imagine all the things you can do if you don’t need to be an engineer to create software.
Software is so powerful because it lets you create almost anything, and this will have a democratizing effect on the world if everyone can create software.
Q: Do you really think engineers having more free time is good for the world? I have engineering friends, and I don’t need their competitive energy channeled into things I like to do!
Kevin: I spend a non-trivial fraction of my day doing stuff I wish I didn’t have to do—whether it’s busy work at the office or filling out forms for my kids’ soccer team.
There’s a big chunk of my day I wish could be automated, and it should be. As an engineer for many years, I’ve experienced moments where some problems are really hard and require all your time and attention, but other times, you know how it’s going to work, and it’s just a lot of typing ahead of you.
That stuff should be automated, and it will be.
On AI in Creative Work and Design
Q: As a content creator, I’m concerned about the rise of AI-generated content. It feels like authenticity is diminishing rapidly—95% of my videos could be generated by someone clicking three buttons. Where do you think this is headed, and is this a concern for OpenAI?
Kevin: Here’s my personal view: with most human creations, there’s a sense of “proof of work” that’s valued.
People value something they know took a lot of time and energy to create, not something you can make in five seconds.
Take Sora, our video model, for example. You can’t just go to Sora today and say, “Make me a movie”—it doesn’t work like that. But if you give it thoughtful, detailed prompts, you can create amazing things.
We have a creator internally who’s worked in the industry, and he can do things with Sora that I can’t believe—they’re well beyond anything I can do, even though we’re using the same tool.
I spoke to a sci-fi director who uses Sora to explore creative options.
For a scene where a plane in outer space zooms into a Death Star-like planet and then cuts to zooming along its surface, he used to go to a special effects studio, pay $100,000, and wait a month for two versions of the cutscene.
With Sora, he can make 40 different versions in an afternoon, letting his creativity run wild and collaborating with the AI. He might still go to the special effects studio for the final version, but now he's choosing from those 40 ideas instead of two, resulting in a much better outcome.
In this world, it doesn’t mean you snap your fingers and everyone creates amazing stuff. It still takes a huge amount of work to do great work. The bar for effort and expertise probably stays the same, but the output with AI can be that much better, more accessible, and faster.
Q: It seems like the future of creative work or coding might turn into management, since the ability to generate content was always available to people with capital, who could simply hire others. If you were a creative, whether a coder, content writer, or ad scriptwriter, how would you pivot to this new world, especially when people value intelligence and building things from scratch?
Kevin: Technology is an ever-increasing ladder of abstractions. We move faster because we’re not designing our own circuits, writing machine code, or even assembler anymore.
We’ve moved to C, Python, and web frameworks that handle the underlying operating system, so you’re living in a browser. AI is one more abstraction that helps you move faster.
If you use AI a lot to write code and care more about the product’s output than the code itself, do you know the internals as well as if you’d done it yourself? Probably not.
My dad was an engineer who worked on compilers at Microsoft—do I know the internals of the compiler to the degree he did? Definitely not. But does it matter? I’m writing Python in a web browser and deploying it in real time.
These are tradeoffs. You lose a little bit of that deep understanding, but you gain speed and the ability to bring this capability not just to people who know compilers and Fortran but to anyone who can use ChatGPT.
That’s amazing. It doesn’t preclude you from going deep into the details if you want to—there are still people making chips, and they do an incredible service to the world.
But the majority of people don’t need to learn that level of the stack, and they can operate faster and do more things as a result.
On the Future of Programming and Engineers
Q: If coding, a symbol of intelligence, is getting commoditized, how should engineers starting their careers deal with the incoming status hit? On social media, we see people in denial, grief, and all stages of processing this change. How do you handle the competition when everyone with access to AI can now compete for the same jobs or titles, even if they don’t know the underlying stack as well as you do?
Kevin: What will matter in a world where AI can do so much is not just raw intelligence but expertise and experience—understanding what problems to solve, where to focus your work, and where the leverage is.
For example, Jeff Dean is a better engineer than I am, and I’m willing to bet Jeff Dean plus AI is a better engineer than me plus AI. Expertise still matters, so everything doesn’t completely equalize.
If more people have access to the ability to solve problems, the world will be a better place because more problems will get solved, and society will move ahead faster.
If you can create anything, it puts a huge emphasis on knowing what to create—having a feel for the most important problems to solve and how to go about solving them. I’m excited for that future.
On Whether AI Will Create New Job Categories
Q: Some argue that high competency plus AI will continue to have an advantage, while others say AI will create entirely new job categories. These ideas seem contradictory. What new types of jobs do you see emerging, especially as you work on products at OpenAI?
Kevin: It’s hard to say exactly, and I’m not sure I have an answer that will make you happy, but I do think just about every job will be complemented by AI.
You’ll either use it day in and day out to augment yourself in your job, or, as Varun mentioned, people will increasingly be managers of these AI employees that do a lot of the basic work for them.
AI might handle the “what,” while humans take responsibility for the “so what,” similar to how managers work with people.
A lot is going to change over the next few years, but one thing I’m confident in is that people inherently want to be part of something bigger than themselves.
I don’t believe in a world where we all get universal basic income, lie on the couch, and do art. People want to create, help others, and make the world a better place.
AI will change things and help us do that in ways we can’t now, but that fundamental desire is innate. We won’t be relaxing while AI does all the work—we’ll still strive to leave the world a better place, just with different tools.
On Billion-Dollar AI Startup Ideas
Q: While building products at OpenAI, you must come across ideas that could be startups but aren’t your priority. Can you share a couple of ideas—ideally worth a billion dollars—that someone could build using what OpenAI is creating?
Kevin: I’ll give you a general one and a specific one.
The general idea is that these models are not intelligence-limited—they’re teaching-limited.
They can learn anything; it’s about what you have to teach them. There are huge opportunities for big companies to be built leveraging private data.
The majority of the world’s data is not public—it’s locked up behind the walls of enterprises, institutions, and governments. You can use that to make models incredible.
You’re already seeing this in legal and other fields, where private data is used to augment models and make them amazing at specific functions, like legal workflows. I think you can do that across many different areas.
The specific idea, which is super relevant for the whole world but particularly for India, is personalized tutors for every child.
I really want every kid to grow up with a personalized tutor, and I don’t know why it doesn’t exist yet.
My kids don’t have one, and I don’t know if you know anyone who does.
Every study I’ve seen shows that kids with normal education plus a personalized tutor are standard deviations above those who just get a normal education.
And there are kids all over the world who don’t even get what we consider a basic education, but they probably have access to phones, and ChatGPT is free.
I want to see a world where every kid grows up with a personalized tutor that can teach them anything, push them as hard as they want to be pushed, and let them grow as fast as their innate intelligence allows.
The world would be a better place, and the AI can do it today—we’re not waiting for some breakthrough four years from now. I want to see someone build this and get it to three billion kids on the planet.
Q: Indian students aren’t used to kind tutors—we need our teachers to yell at us if we underperform!
Kevin: We can train a model that’ll yell at you in unhinged mode!
Q: What if a kid has a lot of “why” questions, and the AI can answer them, but by the time the kid grows up in 20-25 years, a lot of that education might not be useful? How do you think about schooling and the future for your own kids, given this rapid change?
Kevin: Education has to change, just like how calculators meant we shouldn’t teach people to do super long division as a skill anymore.
Education should evolve with AI.
There’s value in being in a classroom in a social setting with a teacher who can guide you, and there’s value in having a one-on-one experience with an infinitely patient AI that can go at whatever pace you want, where you can ask anything without feeling silly.
Together, these could mean 16-year-olds in the future are five grades ahead of 16-year-olds today. If we could do that, the world would be a fundamentally better place.
We could do so much better than the education system we have. This is my personal passion—I really want to see someone take the world by storm with an amazing AI tutor.
At OpenAI, we would go to the ends of the earth to support someone doing this at scale. It’s one of the most tangible ways we can improve the world.
On Voice Interfaces and Robotics
Q: I thought voice interfaces would take off, like in the movie Her, where we’d talk to AI all the time, but I find myself texting AI more often. Why hasn’t voice mode picked up as much as expected, and do you have any guesses based on your product work at OpenAI?
Kevin: I think it’ll get there. We have a lot of folks who are power users of voice mode. I know people who walk home from work—whether to their car or the bus—and talk to ChatGPT the whole time, taking it through their day, discussing meetings, and coming back with a set of to-dos after debriefing their entire day.
There are a bunch of really cool use cases for voice. I think it can be way better than it is. Right now, if you leave a gap in the conversation, voice mode will jump in because it thinks you’re done talking, but humans don’t do that—they take cues in ways the AI doesn’t today.
There’s also the issue of talking over each other, where it stops and starts. As humans, we learn to talk over one another a little bit and pick up on cues, and we haven’t fully mastered that with AI yet.