podcast-3.json
{"podcast_details": {"podcast_title": "The AI Breakdown: Daily Artificial Intelligence News and Discussions", "episode_title": "Meta to Launch Open Source Coding AI as Latest Assault on OpenAI", "episode_image": "https://megaphone.imgix.net/podcasts/9ad36894-20f2-11ee-9d6c-d76aa9b66d23/image/BITCOIN_BUILDERS_3.png?ixlib=rails-4.3.1&max-w=3000&max-h=3000&fit=crop&auto=format,compress", "episode_transcript": " Today on the AI Breakdown, we're looking at Meta's latest plans for competing with OpenAI. Before that on the Brief, a jailbroken Nest shows what the future of AI-powered devices might look like. The AI Breakdown is a daily podcast and video about the most important news and discussions in AI. Go to breakdown.network for more information about our Discord, our YouTube, and our newsletter. Welcome back to the AI Breakdown Brief, all the AI headline news you need in around 5 minutes. We kick off today with one that is a little bit different and a little bit more fun than our average kickoff. Justin Alvey is a hacker who takes apart hardware and tries to integrate AI services to show in some ways what the future of AI devices might look like. Twitter has been abuzz with his latest experiment, in which he took apart a Google Nest Mini and set it up so that you can actually run your own LLMs, agents, and voice models on it. You can see a demo of him using it to manage his messages from various services, as well as having a brief conversation with a long-dead cultural icon. Hello, I'm here. Hey, do I have any new messages today? Hey Justin, you have three new messages. There is one from your dad on WhatsApp about some unexpected snow, one from David from Twitter sharing a link to a research paper, and Jenny is messaging the tennis group on Signal about playing this afternoon. Thanks. And did John message a few days ago about an event? When was it again? Yes, John sent a message on Friday on Messenger about a birthday barbecue at Golden Gate Park this Saturday at 11am. 
Okay, great. Can you reply and say that I'll be there? Sure. I replied to John saying that you'll be there this Saturday. One more thing. Is Alan there? Can I speak with him? Sure. One moment. I have a question for you. Hello, Justin. How may I help you today? Hey, Alan. I'm just working on a demo right now, but I look forward to chatting more later. All right. Good luck with the demo and let me know if I can help with anything else. Otherwise, have a great day. After sharing the demo, Justin gets into the particulars of how he actually did this modification. Ultimately, he says he's working on open sourcing the design, build instructions, firmware, bot, and server code, and he thinks he'll have something in the next week. Now on the one hand, this is sort of just a tinkerer doing a classic tinkerer's project. But on the other hand, it's exactly this type of project that's likely to show us what the future of AI-enabled devices is going to look like. Speaking of things that people are building, one of the areas of AI that I am personally following most closely is music generation AI. You will have probably picked up on the fact that I love music in all of its various dimensions, and I'm super intrigued by the future of music production in an AI-powered world. Now I'm not sure how much we're going to get full text-to-music composition tools versus AI integrated into existing production software that changes how we use it, but there's no shortage of efforts along that front. In fact, I've noticed two this week. The first comes from Brett Bauman and is called Songburst. Brett tweets, Introducing Songburst, my new AI music generation app. You fully own your music, unlimited exports as WAV or MP3, prompt enhancer to extend your prompt, fast, native and minimal app. Here's the demo that Brett shared alongside it. Then, just a day or two after that, I saw this announcement from Cassette AI, which offers something similar. They write, we're live. 
Music production, democratized. I think we're getting close to the point where it makes sense to do a full show and video comparing these different services. So keep an eye out for that soon. Doing a quick survey of the industries that mainstream media is writing about being disrupted by AI, Reuters wrote a piece called From Mad Men to Machines, Big Advertisers Shift to AI. Some of the world's biggest advertisers, they write, from food giant Nestle to consumer goods multinational Unilever, are experimenting with using generative AI software like ChatGPT and DALL-E to cut costs and increase productivity. WPP CEO Mark Read said the savings can be 10 or 20 times. Rather than flying a film crew down to Africa to shoot a commercial, we've created that virtually. Now, one interesting nugget from the story is how many of these companies are spinning up their own AI tools rather than just using off-the-shelf third-party options. Unilever, which of course owns more than 400 brands, including Dove and Ben & Jerry's, has a custom AI that writes product descriptions for websites and digital commerce sites, and sub-brands within their portfolio, such as TRESemmé, also have their own AI content generation tools. Part of this is due to concerns around copyright and data privacy, and it certainly fits the larger trend that we're seeing of enterprises addressing those types of issues by spinning up their own solutions. Another industry that got featured this week around its use of AI was recycling. The Atlantic just published a piece on the future of recycling. The piece tells the story of how, at recycling centers around the country, AI-powered robots are being deployed, separating and sorting different recycling materials that otherwise might take people an immense amount of time to sort. 
The Atlantic writes, the issue is that it's long been too hard for recycling plants to sort material with the level of specificity needed to satisfy manufacturers that could theoretically use it. The traditional recycling methods used to sort waste, including sieves, blasts of compressed air, glass crushers, powerful magnets, and near-infrared light, do a good job of separating waste into broad categories of paper, glass, and metal. But finer layers of detail often go unnoticed, especially with plastic. The introduction of AI is apparently doing a much better job of that granular-level sorting and, in so doing, is opening up new markets. Now interestingly, given those positive articles about how AI is improving industrial outcomes, the Wall Street Journal today also ran a piece called Companies Increasingly Fear Backlash Over Their AI Work. The subheader reads, until now, businesses have assumed that leveraging cutting-edge technology was inherently a good thing. That's no longer the case. More and more, the article writes, companies say they are concerned about facing public criticism over their use of artificial intelligence, thanks to rising fears over the technology's negative impacts, including job losses. One example they gave is Emory Healthcare, which is currently testing a generative AI tool that helps summarize recorded conversations between doctors and patients and auto-generates notes. The auto-generated notes are reviewed by doctors, and patients have the chance to opt out. But Emory thinks that the benefits could be huge. It saves doctors time, and it keeps their focus on actually providing solutions, but they're really worried that people are going to have a negative reaction. A representative from Emory said, it's going to be creepy to some people potentially, so we're going to have to be careful about that. I don't think we can ignore it. 
The article also looks at Workday, an enterprise cloud application provider that has said it has chosen not to pursue some uses of AI for fear of invading people's privacy. And it also looks at Levi Strauss, which caught a lot of flak on social media earlier this year after saying that it was going to be using AI to generate images of more body-inclusive models. Then of course, there's the big dust-up over Marvel Studios' use of AI in the opening of the Secret Invasion miniseries. And I think that the biggest point here is that we are in a very strange and interesting in-between moment with AI. The technology is clearly powerful, and for many people clearly helpful in their jobs. But at the same time, it is so clearly powerful and so clearly helpful that it is engendering concerns of human replacement. Now this is natural for any new technology, but it's definitely ratcheted up to 11 in the context of artificial intelligence. Layer on top of that the massive growing conversation around extinction risk from AI, plus the pre-existing animosity towards big tech right now, and you have a pretty potent stew of consumers looking very warily at this type of new innovation. The positive side of that, of course, is that it makes us less likely to sleepwalk into negative potential outcomes from this technology. But the downside is, of course, that something which could be used for massive enhancements and improvements in productivity and people's lives might fall by the wayside for fears that may or may not ultimately end up being well-founded. At the end of the day, I think that this is just a phase that we're going to have to get through. And if we do it well, it really could be a good thing that people are handling this set of innovations with more skepticism than they perhaps have for previous innovations. That is going to do it for today's AI Breakdown Brief. Thanks as always for listening or watching, and I'll be back soon with the main AI Breakdown. 
Before we get into the main AI breakdown, I want to tell you about today's sponsor, Supermanage. If you work in a professional setting, you probably have some version of a one-on-one meeting either with the people that work for you or the people that you work with. Unfortunately, all too often, those one-on-one meetings become glorified catch-up calls. Don't you wish you could jump right to the stuff that really matters? That's where Supermanage comes in. Supermanage AI magically distills your team's public Slack channels into a real-time brief on any employee, any time. Catch up on contributions, work in progress, challenges they're facing, sentiment, everything you need to show up ready for a truly meaningful conversation. And it's completely free. Visit supermanage.ai forward slash breakdown today to start making the most of your one-on-ones. And thanks again to Supermanage for sponsoring the AI Breakdown. Welcome back to the AI Breakdown. Well, as Sam Altman has learned this year, it is not easy being in the catbird seat in a highly competitive new industry. Ever since the launch of ChatGPT, OpenAI has pretty unassailably been the leader of the AI space. Now, this was something of a surprise to many in the field. How could, for example, Google have let this upstart, even a well-funded upstart, get so far ahead of them when it came to something that they had long seen as one of the most important technologies for the coming decades? How could Meta find itself out on the back foot? Microsoft at least had cut a deal to be in OpenAI's orbit, but still the leadership of this startup was surprising. Being a leader, however, is not always easy, as of course it means that everyone is gunning for you. Interestingly, a few months ago, an insider at Google released a note that suggested that OpenAI's lead might not be as unassailable as it initially seemed, and that the source of its biggest competition was unfortunately not Google or any other centralized company. 
Rather, it was the incredible explosion of open-source AI efforts that had really come to the fore since the leak of Meta's Llama model. That piece began, We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be? But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch. I'm talking, of course, about open source. Plainly put, they're lapping us. Things we consider major open problems are solved and in people's hands today. While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound for pound more capable. They're doing things with $100 and 13 billion parameters that we struggle with at $10 million and 540 billion, and they're doing so in weeks, not months. This has profound implications for us. Now, the note concludes that, ironically, it was Meta that was coming out ahead in all of this. The note's author wrote, paradoxically, the one clear winner in all of this is Meta. Because the leaked model was theirs, they have effectively garnered an entire planet's worth of free labor. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products. The value of owning the ecosystem cannot be overstated. Now, the one complaint, or the one thing holding Llama 1 back, was that technically it was restricted for research purposes only. In other words, there wasn't a commercial license available alongside the technology. Now of course, that didn't stop individual hackers from building things that they then released for profit, but it did put a damper on how much big enterprises or corporations were willing to build on top of what they had to offer. 
That of course changed when they launched Llama 2 just about a month ago. The open source-ish release came with a commercial license that, for companies who wanted to spin up their own AI solutions rather than buying something off the shelf, made it look like a really attractive option. And indeed, one of the things that we've talked about frequently on this show is the extent to which many companies have decided to resolve concerns around security and data privacy and proprietary information by opting not to work with a startup, as they might have in the context of a different technology, and to instead use these widely available and well-supported open source models, or open source-ish models (I don't want to get into a debate about that terminology right now), to simply build the things that they need from the bottom up using their data inside their own walled gardens. If Llama 2 looked like a really strong attack on some of the other big closed source models, particularly OpenAI's, The Information is reporting that Meta is far from done. Yesterday, The Information published a piece called Meta's Next AI Attack on OpenAI: Free Code-Generating Software. They write, Meta Platforms is preparing to launch software to help developers automatically generate programming code, a challenge to proprietary software from OpenAI, Google and others, according to two people with direct knowledge of the product. Meta's code-generating artificial intelligence model, dubbed Code Llama, will be open source and could launch as soon as next week. Now, this model builds on Meta's Llama 2 and is meant to rival OpenAI's Codex model, which currently powers coding assistants such as Microsoft's GitHub Copilot. The Information lays out the implications really clearly. They say that it could upend the AI field by making it easier for companies to make their own AI apps without paying for software from OpenAI, Google or Microsoft. 
Now, part of the reason that this matters is that this is one of the clearest, most product-market-fitty use cases of AI so far. A GitHub survey earlier this year found that something like 97% of developers were already using AI assistants in their work. Industry commentators definitely see the relevant implications for the enterprise. Tim Chen, a managing partner of Essence VC, said, For enterprise adoption, this could be pretty big leverage to get people to use this much faster. If I'm Bank of America, I can fine-tune something on my own and have a really great model now that can generate a lot of specific code based on my existing codebase. The Information piece points out that 27,000 companies are currently paying for a GitHub Copilot Enterprise license, but that, quote, Code Llama's release could represent a major disruption to the industry. The public release of a code generation model could make it easier for new competitors to get started or for large companies concerned about the security of their source code to build their own in-house models, making outside providers redundant. So if you're OpenAI, you have to be looking over at what Meta's doing with some really wary eyes, right? Luckily, though, you, of course, have your big partner in Microsoft, right? That gives you some extra big clout, right? Deeper pockets to pull from, more resources to go compete. Except that increasingly, Microsoft appears to be hedging its bets. First of all, when Meta announced Llama 2, they did it at a Microsoft conference with Microsoft as their main launch partner. One of the key takeaways from their press release was Microsoft and Meta are expanding their long-standing partnership, with Microsoft as the preferred partner for Llama 2. Now, as if that weren't enough, The Information also reported yesterday that Microsoft is planning an AI service with a new partner, Databricks, that might also be direct competition to OpenAI. 
The article starts, Microsoft has hitched its wagon to OpenAI when it comes to selling artificial intelligence that can understand language and write emails, meeting summaries and slide decks. Now, Microsoft has found a second horse to ride in the AI race, Databricks, which has positioned itself as a kind of anti-OpenAI. So Databricks is one of these companies that is dealing with this exact issue that we were just discussing, which is that big enterprises are concerned about the potential for data leaks and proprietary information being fed into the AI training of third-party models. And so what Databricks does is help companies make their own AI models from scratch, or take and repurpose existing open source models, effectively as an alternative to something like licensing OpenAI's APIs. According to three people with direct knowledge of the plans, Microsoft is planning to begin selling a version of Databricks software as part of its Azure cloud server unit. The Information writes, In a touch of irony, Microsoft is using OpenAI's technology to create a ChatGPT-like chatbot to help less tech-savvy customers use Databricks software, which was originally developed for sophisticated data scientists. The net result could be that some Microsoft customers end up using open source models rather than OpenAI's closed source ones. Now of course, you understand the logic if you are Microsoft. Just because you made a major investment in a company like OpenAI doesn't mean you want to put all of your eggs in that basket. What's more, holding aside the competitiveness, it seems quite clear that there is likely to be an ongoing market for OpenAI to the extent that they can continue to develop world-class models, even if what enterprise customers are looking for is something a little bit different than the OpenAI product offering. 
It's clearly not just Microsoft who's getting the picture that enterprises are going to want more boutique, customized options when it comes to integrating AI across their companies. Amazon Web Services, for example, has been screaming about exactly this, saying that there's no universe in which there's just one winning model. And the reason that they've created their Bedrock platform is to give enterprise customers the choice of which models they actually want to use and customize. Still, it's really fascinating to see how things are evolving in this field. It suggests very clearly to me, at least, that, one, there's likely not going to be just one winner, and, two, that companies are going to have to be extremely, extremely nimble to adapt to what the market is telling them about how companies and customers want to actually interact with this new technology. Interestingly, one of the flip sides of there being such clear use cases so fast, which is frankly a little bit different than most new technologies, is that it strikes me that customers, both on the individual and on the enterprise side, are more empowered to know more of what they want right away and ask for it from the companies they're going to interact with. This isn't a bad thing. It's just different than what we've seen with previous technologies. But then again, there's a lot about AI that makes it not exactly like previous technologies. Anyway, fascinating little moment in the evolution of the AI space, but I certainly wouldn't count OpenAI out yet. Later this weekend, we're going to take a look at Google specifically and all of the rumors and innuendo about what might be coming down the pipeline for them. For now, that's going to do it for today's AI breakdown. Until next time, peace."}, "podcast_summary": "In the podcast, the host discusses several topics related to AI. 
First, they highlight a hacker named Justin Alvey, who integrated AI services into a Google Nest Mini device, allowing it to run custom voice models and agents. They provide a demo of how the modified device can manage messages and interact with users. Justin plans to open-source the design and code for others to use. \n\nThe host then notes the growing interest in AI-generated music and mentions two recent projects called Songburst and Cassette AI. These projects aim to revolutionize music production using AI technology. \n\nThe podcast also covers the use of AI in advertising, with big companies like Nestle and Unilever using generative AI software to cut costs and increase productivity. It's mentioned that many companies are building their own AI tools instead of relying on third-party providers due to concerns around copyright and data privacy. \n\nAnother topic discussed is the use of AI in recycling, where AI-powered robots are being used in recycling centers to sort and separate materials more efficiently, opening up new markets and opportunities. \n\nThe host then shifts to the growing concerns and backlash around AI technology, specifically related to job losses and privacy invasion. They mention how Emory Healthcare is testing a generative AI tool that summarizes recorded conversations between doctors and patients, which has raised concerns about privacy and creepiness. \n\nFinally, the host talks about Meta's plans to compete with OpenAI by launching Code Llama, an open-source code-generating AI model that aims to rival OpenAI's Codex model. Microsoft, a partner of OpenAI, is also planning to launch an AI service with a competitor called Databricks, potentially giving customers more choice in using open-source models instead of closed-source models from OpenAI. 
The host concludes by noting the evolving nature of the AI industry and the need for companies to be nimble and adaptive to meet the changing demands of customers.", "podcast_guest": "Justin Alvey, N/A", "podcast_highlights": "Key areas and interesting insights from the podcast:\n\n1. Hacker's Experiment with Google Nest Mini: Justin Alvey, a hacker, demonstrated how he modified a Google Nest Mini to run AI services and manage messages from various platforms. This experiment showcases what the future of AI devices might look like.\n\n2. Open Source AI Models: The podcast discusses the proliferation of open-source AI models like Llama and Code Llama from Meta (formerly Facebook). These models are rivaling closed-source models from companies like OpenAI, Microsoft, and Google. The availability of open-source models enables companies to build their own AI solutions and reduces reliance on third-party providers.\n\n3. AI in Music Generation: The podcast mentions the growing interest in AI-powered music generation. Various tools and services, such as Brett Bauman's Songburst and Cassette AI, are emerging in this space. The host expresses his fascination with the future of music production in an AI-powered world.\n\n4. AI Disruption in Advertising and Recycling: The podcast highlights how AI is disrupting industries like advertising and recycling. Advertisers are using generative AI software to cut costs and increase productivity. AI-powered robots in recycling centers are improving the sorting and categorization of materials, opening up new markets.\n\n5. Concerns and Backlash against AI: The podcast acknowledges the rising concerns and potential backlash against AI, particularly regarding job losses and invasion of privacy. Emory Healthcare's AI tool for generating notes from doctor-patient conversations and Levi Strauss' use of AI for generating diverse model images are cited as examples. 
Companies are cognizant of potential negative reactions and are cautious about AI implementation.\n\n6. Meta's Competition with OpenAI: Meta's open-source AI models, Llama and Code Llama, are seen as potential competitors to OpenAI's models, such as Codex. Meta's partnership with Microsoft and the emergence of Databricks as a second AI option for Microsoft indicate that companies are diversifying their AI strategies.\n\n7. The Changing Dynamics of the AI Market: The podcast reflects on the evolving AI landscape and suggests that there will likely not be a single winner in the market. Enterprises are demanding more customized and tailored options for integrating AI into their operations. Companies need to be agile and responsive to customer preferences and market demands.\n\nOverall, the podcast provides insights into the current state of AI development, the emergence of open-source models, the impact on various industries, concerns surrounding AI, and the competition within the AI market."}