The Most Insane Week of AI News In Months!
The AI world had a busy week with significant developments and announcements.
- OpenAI rolled out Code Interpreter to all ChatGPT Plus members, showcasing various impressive use cases.
- ChatGPT with Code Interpreter can detect faces, create animations, and analyze data from CSV files.
- Sarah Silverman and other authors filed a lawsuit against OpenAI and Meta, alleging copyright infringement.
- Stable Diffusion XL 1.0 was previewed; users can help refine the model through a voting system in the Stable Foundation Discord.
- Anthropic released Claude 2 into open beta, offering the largest context window of any publicly available model, free to use in the US and the UK.
- Beehiiv added AI features to its newsletter platform, including an AI writing assistant and AI text tools.
- Shutterstock expanded its partnership with OpenAI with a six-year deal around OpenAI’s image generation models. Separately, Google introduced NotebookLM, which turns files in Google Drive into a searchable database.
- Elon Musk announced xAI, a collaboration with top AI researchers, to address AI-related concerns.
- Other notable updates include Epidemic Sound’s Soundmatch, Bard’s new features, Stable Doodle by Clipdrop, and Leonardo AI’s Prompt Magic version 3.
So of course, the busiest week that we’ve probably seen in the AI world since March of ’23 just so happens to be the week I’m on vacation in Colorado, so that’s awesome. But don’t worry, I’ve managed to keep up with all of the things that are happening, and I’m gonna break it all down for you right now.
If you’re not familiar with what AI is, well, Vice President Kamala Harris has a great explanation for you. AI is kind of—it’s first of all, it’s two letters, it means artificial intelligence. Now that we’ve gotten that out of the way, let’s talk about Code Interpreter.
Last week, OpenAI announced that Code Interpreter would be available to all ChatGPT Plus members, and this week we finally all got access to it. The cool stuff people have been doing with it has been all over Twitter lately. Now, I do plan on making a full breakdown of some of the coolest things I’ve seen Code Interpreter do, but for now, here are some examples of what people have used it for.
My buddy Amar here actually made a video game where Elon Musk fights Mark Zuckerberg using Code Interpreter. We can check out the results here where Elon and Zuck fight, and this was all done in what he says took only 20 minutes to code up. He said he asked GPT-4 Code Interpreter to make an Elon Musk vs. Zuck cage fight game, and it pulled it off in 20 minutes. He goes on to say that Midjourney generated the sprite sheet to actually get the images inside the game.
Here’s an example from Skullskip showing that ChatGPT with Code Interpreter can actually detect faces. And here’s another example from Chase Lean, who has this really wide image. He uploaded it into ChatGPT with Code Interpreter and managed to get it to pan across the image for him and produce this really cool animation.
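To be clear, we can’t see the exact code Code Interpreter wrote for that animation, but the panning trick itself is simple. Here’s a minimal sketch of my own, using Pillow to slide a fixed-width window across a wide image and save the crops as a GIF; the image size and parameters are made up for illustration:

```python
from PIL import Image

def pan_frames(img, view_w, step):
    """Slide a fixed-width viewport across a wide image, yielding crops."""
    frames = []
    for x in range(0, img.width - view_w + 1, step):
        frames.append(img.crop((x, 0, x + view_w, img.height)))
    return frames

# A synthetic wide image standing in for the uploaded photo.
wide = Image.new("RGB", (1200, 300), "navy")
frames = pan_frames(wide, view_w=400, step=100)

# Write the crops out as an animated GIF.
frames[0].save("pan.gif", save_all=True, append_images=frames[1:],
               duration=50, loop=0)
print(len(frames))  # → 9
```

Smaller `step` values produce a smoother (but larger) animation.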
Now, to get access to Code Interpreter, first make sure you are a ChatGPT Plus subscriber. Click the little three-dots menu at the bottom, click Settings, click Beta features, and make sure Code Interpreter is enabled. Once you’ve enabled it, underneath the GPT-4 button you should see the option to use Code Interpreter.
Another thing Code Interpreter is really good at is analyzing data you upload, such as a CSV file. Here’s an example I created with a column of dates, a column of foods I ate, and a column for how I felt after eating each food. This is all fake example data for demonstration purposes, but if I export it as a CSV file, I can open ChatGPT with Code Interpreter enabled, drag and drop the CSV file in, and enter a prompt like this: ‘Here’s a CSV file of foods that I ate and how I felt after eating them. Please analyze the data and find correlations between the types of food I ate and how I felt afterward.’
You can see it starts by echoing some of the rows inside ChatGPT to show you a sample of what it’s finding in the CSV file. It then generates another table with an analysis of the various foods and how I felt after eating them. Then it found the correlation: ‘Interestingly, all foods that are labeled as fried have been associated with a headache.’ So it correctly identified that the foods labeled as fried are the ones that resulted in headaches, which is exactly the correlation I was hoping it would find.
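Under the hood, Code Interpreter is just writing and running ordinary Python, typically pandas, for this kind of request. As a rough sketch (with toy data standing in for my actual CSV), the fried-food/headache check could look like this:

```python
import pandas as pd

# Toy stand-in for the uploaded CSV: date, food, and how I felt afterward.
data = pd.DataFrame({
    "date": ["2023-07-01", "2023-07-02", "2023-07-03", "2023-07-04"],
    "food": ["fried chicken", "salad", "fried rice", "grilled fish"],
    "feeling": ["headache", "fine", "headache", "fine"],
})

# Flag fried foods, then cross-tabulate against the reported feeling.
data["fried"] = data["food"].str.contains("fried")
table = pd.crosstab(data["fried"], data["feeling"])
print(table)
```

In this toy data, every fried food lines up with a headache and everything else with feeling fine, which is the same kind of correlation Code Interpreter surfaced.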
Also this week, Sarah Silverman, along with a handful of other authors, sued OpenAI and Meta for copyright infringement. They claim the books they published were scraped by these companies for use in their large language models from sites that acquired them illegally, like torrent sites, and that they never consented to the use of their copyrighted books as training material for the companies’ AI models.
Also this week, we get a sneak peek at Stable Diffusion XL 1.0. Previously, we had the ability to use SDXL 0.9 over on sites like Clipdrop, but right now we can actually use XL 1.0 inside the Stable Foundation Discord. In fact, if we jump into the Stable Foundation Discord here, the way it works is we prompt an image, it gives us two images, and then it asks us to vote for the one we like better, A or B. This preference feedback is helping to refine SDXL 1.0 before release. We create a prompt by typing ‘/dream’ followed by our prompt; in this example, ‘A gray wolf in the woods surrounded by snow.’ You can see it then gives us two options to choose from, Option A and Option B, and we tell it which one we actually prefer.
I prefer Option B, so I click Vote B, and I have just provided feedback that helps improve Stable Diffusion XL 1.0. We also have options to redo, resize, and restyle. In this case, it added the style ‘isometric,’ and now we have two new options to pick a favorite between; I think I like Option A better than Option B. Let’s go ahead and vote A, and once again I’ve just helped improve SDXL 1.0.
I will make sure it’s linked up below so you can jump into their Discord, play with it yourself, and start seeing what SDXL 1.0 is capable of. And some of the biggest news that came out this week: Anthropic released Claude 2 into open beta. Anybody in the US and the UK right now can use Claude 2, and it’s also available via API for select companies.
I am going to do a deeper-dive video on Claude 2, so I’m not going to go into too much depth in this video, but one of the highlights is the context window: Claude 2 can handle up to 100,000 tokens, roughly 75,000 words, shared between input and output. That makes it the largest context window of any publicly available model right now, and I’ve been using it quite a bit, quite honestly, for a lot of things. I prefer it over ChatGPT and GPT-4 at the moment, and you can use it completely for free while it’s in open beta. Simply go to claude.ai, where it will ask you to log in, and you can start using it for free if you’re in one of the countries where it’s available.
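As a rough sanity check on that tokens-to-words math: a common heuristic for English text is about four characters per token (an approximation, not Claude’s actual tokenizer). A quick back-of-the-envelope calculation shows that roughly 75,000 words does land under the 100,000-token window:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for typical English text.
    # This is NOT Claude's real tokenizer, just a ballpark estimate.
    return len(text) // 4

# ~75,000 words at ~5 characters per word (including the trailing space).
doc = "word " * 75_000
tokens = estimate_tokens(doc)
print(tokens, tokens < 100_000)  # → 93750 True
```

Real token counts vary with vocabulary and formatting, so treat this strictly as a ballpark.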
Similar to Code Interpreter, we can also upload files and have it work with them. For example, if I upload this same food test file and give it the prompt, ‘Find correlations between the foods that I eat and how I feel after eating them,’ you can see that, just like Code Interpreter, it finds the proper correlation based on the data provided. Honestly, I’ve been very impressed with the results I’ve been getting out of Claude 2. Again, I am going to do a longer breakdown on Claude in a future video, so make sure you’re subscribed to the channel so you can catch it. Also this week, the newsletter platform beehiiv, the company I actually send my weekly Future Tools newsletter through, announced that they’re adding a bunch of AI features to their platform. They’ve added an AI writing assistant and AI text tools. Here, you can actually see an image I generated using beehiiv’s AI image generator, and I mean, it spelled “you” wrong, but I’m actually surprised it got the text fairly decent. This is not sponsored by beehiiv whatsoever; it’s just the tool I use. They added some cool AI features, so maybe it’s one to check out if you have or want to send a newsletter.
Also, this week Shutterstock expanded their partnership with OpenAI and they signed a six-year deal to be the exclusive distributor of OpenAI’s image generation models. Now, this is pretty cool because it’s going to give us access to OpenAI’s state-of-the-art image generation models, and it’s going to allow Shutterstock to offer unique and diverse image options to their customers. So it’s a win-win for both sides.
Another big announcement that happened this week, but also a fairly cryptic one, is that Elon Musk announced xAI. He’s working with some of the top AI researchers in the world, people from DeepMind, OpenAI, Google Research, Microsoft, Tesla, and the University of Toronto. They’re coming together to develop advanced AI models that prioritize safety and ethical considerations. Now, there’s not a whole lot of detail available about xAI yet, but it’s definitely an interesting collaboration to keep an eye on.
Now, the last thing I want to talk about, I’m just going to touch on this really briefly, is research from Microsoft. They put out a paper on a new AI system called ChatGPT++, which is a variant of OpenAI’s ChatGPT. It aims to improve the user experience by addressing issues such as the system’s tendency to make things up. Now, it’s still just research, and there’s no way to actually use this or test it out for ourselves yet. There haven’t been any big announcements from Microsoft about when this might be available for users, but it’s an interesting development that shows ongoing efforts to enhance AI systems.
That wraps up the AI news from this week. It’s been a busy and exciting time in the AI world, and even on vacation I find time to stay on top of the latest advancements. I’m looking forward to getting back and making videos on all these new tools and features; there’s a lot to explore and experiment with. Thank you so much for tuning in, and I appreciate your support.