Mon 05/22 - ChatGPT out at Apple, in at NYC schools, and finishing up its freshman year (Recipe: Auto Email Responder)
E13

Adam:
It's Monday, May 22nd. This is Accelerate Daily. Today we've got ChatGPT out at Apple, back in at New York City public schools, and just finishing up its freshman year of college, and we've got an AI recipe to automate email responses. Put on your goggles. Let's jump into it. Okay. Welcome back everyone. I'm Adam.

gptboss:
My name is Mackenzie. Good morning.

Adam:
And we're back with three links and one hack to keep you caught up on what's happening in AI today. What are we looking at for lead image on the slides today?

gptboss:
So I call this composition Gator Baby,

Adam:
Gator baby.

gptboss:
because those are the two elements. It's like action photography. There's a baby that could have been a young Steve Irwin interacting with, I suppose, an alligator. I'm not really a reptile identification expert, but it is entirely too close for how dangerous that animal normally is. And they both look so realistic, right? This could be real action photography with a baby and a gator. But I find that unlikely.

Adam:
This one came from r/midjourney on Reddit, from a poster by the username of CanadianWeed.

gptboss:
Hahahaha.

Adam:
It says, baby swimming with gators. Is this normal in Florida? It's a whole series of various action shots of way too young children just playing with alligators in swamps.

gptboss:
Specifically alligators and swamps.

Adam:
I thought this one was great because anyone that bumped into the recent talking point about kids identifying as cats and teachers setting up litter boxes in schools...

gptboss:
What?

Adam:
Oh, that was a popular right-wing talking point here about how woke has gone crazy. It turned out not to be true,

gptboss:
Oh, okay, good.

Adam:
but senators were repeating it on the floor of Congress before people finally said, guys, that's ridiculous. And there weren't even pictures.

gptboss:
Brutal. Yeah.

Adam:
So, you know, part of this segment is about, hey, how great is AI? But part of it is also about, okay, so if somebody said, well, down in Florida, you kids are swimming with gators and could show you a picture, you know.

gptboss:
This one sets off the uncanny valley, right? Because this is too good of a picture for Florida, basically. You know what I mean? This is like a studio photograph,

Adam:
Hahaha. Right.

gptboss:
and I would just expect to see a zoomed-in, blurry iPhone shot with artifacts and stuff. If this was actually happening in real life, I wouldn't expect to see it this way. It's so interesting how fast our cognition builds defense mechanisms to new threats. I hope it continues. I hope it continues for the people that it matters for, not just podcast hosts that are in tech all the time.

Adam:
Right, and that it doesn't land in that place of feeding confirmation bias until there's a civil war of some sort.

gptboss:
Yeah, fingers crossed on avoiding confirmation bias.

Adam:
Okay. First up for the news links, we've got no AI for Apple employees. The Wall Street Journal reporting: "Apple restricts employee use of ChatGPT, joining other companies wary of leaks." This is the one that we watch closely at Mission Control, specifically because we provide a service that we hope helps with this. But it's an interesting look at a wave of companies starting to worry that these tools are a leak point for trade secrets and the kind of corporate things you don't want your workplace blabbing about in the world. People are inclined to go to these agents and say, hey, we're doing a marketing campaign for X, Y, and Z. Well, you just told the bot, and if you're on the free version, potentially the training corpus, what you're up to in the next six months of marketing at Apple, which is a very protected secret.

gptboss:
So that's the thing that's interesting about the story to me: how far in the future do you need to worry about your stuff getting indexed? Because eventually it will get in. What constitutes a leak? What constitutes private data or trade secrets? Because eventually those trade secrets are going to be public data. So if the delay is such that it doesn't hit the AI until you've already publicized everything, what's the big deal? But maybe there's stuff that they're working on that they don't ever want to publish, you know what I mean? I'm wondering what the time horizon is.

Adam:
Yeah, that's a good point in terms of how to look at it, which is the difference in how you make these tools available in the shorter term so that your employees aren't deprived of a modern productivity-multiplying tool. And you're right, after a point, if it's already public, you don't care. In fact, it might go the direction of what we talked about last week, which is that now you suddenly want to thousand-x the amount that you're talking about it and the amount that AI agents are aware of it, because those agents are going to be the touch point when someone goes and says, who are the top five providers for blah, blah, blah. Apple wants Apple

gptboss:
Yeah.

Adam:
to show up there. Right. So.

gptboss:
Yeah, totally. It becomes a recommendation engine after some time. I'm sure Apple knows what they're doing. And the headline is written in such a way: Apple restricts use. It doesn't ban, right? This isn't a blanket ban. This is just, hey guys, come on, let's be sensible about what we're doing here.

Adam:
But it's a perfect transition into the next topic, which is the opposite: NBC News reporting "New York City public schools remove ChatGPT ban." The city's education department had announced a ban on the chatbot from its schools' devices and networks in January. In this case, it's New York City Public Schools going the other direction. I would suspect realizing that they couldn't enforce it is part of this.

gptboss:
That's what I believe as well.

Adam:
But the way that they talked about it was way more about fairness of access and the power of these tools as educational aids, things like that.

gptboss:
Yeah. He said the situation has "now evolved into an exploration and careful examination of this new technology's power and risks." So he still doesn't like it. I want to go the other way. Something I've been letting tumble around in my brain on the AI point is: let's just replace the whole school system with AI, because it can be self-directed and self-paced. As I've been getting into this, there's a lot that comes from school systems that has to do with socialization and, essentially, domestication of children, especially teens, because they're so wild. Evolutionarily, when you're a teenager, you need to be leaving your tribe and doing crazy shenanigans as far away from home as possible. So school starts building these social circles that we need to survive and thrive, that we eventually hang on to. And that's one benefit that AI can never, ever touch. But as far as the education portion goes, what school purports to be for publicly, which is to teach people skills and information they need to succeed in the world, it's way worse than AI, because it can't allow for self-direction. So they see it as a threat, and now they're saying, we can't do anything about this threat, so let's try to embrace it. And I predict that they're not gonna be able to do that either.

Adam:
One of the first people I went to when I was looking at moving to working at an AI startup is an old high school teacher of mine that I still talk to to this day, and I asked him what he thought of it. He's an English teacher, ninth through twelfth grade. And he said, you know, this isn't going to matter for good teachers, because you can still make up questions that an agent wouldn't have an answer to, just by personalizing them. Because if the point is learning synthesis and reading comprehension, then make the question: what five-person band would you put together made up of teachers at our school? That ends up immediately esoteric enough that an agent's not going to be as helpful with it; you're going to be able to tell the difference. Right?

gptboss:
Totally.

Adam:
So that's already just equipping the teachers to think about what they're doing in a different way.

gptboss:
Mm-hmm.

Adam:
But the thing that I said to him, and he was like, yeah, that's a good point, is that it's the self-direction you're describing, but you can also see it through another lens, which is that there's finite time, right? If you have one teacher to 30 kids, you explain it the one way, and then how much time do you have for questions before you have to move on? Or else you're not going to get to the next thing and cover all the lessons before the end of the year. The kid that says, I still don't get it, I still don't get it, ends up getting left behind, and made to think that they're stupid, potentially,

gptboss:
Broken and a bad person.

Adam:
because you didn't get it on the third try. Yeah. And I'd be willing to guess that most of the time, that's just the problem of not having enough iterations, the teacher not having enough time to back up and try again, back up and try again. Anyone who has a child is familiar with that aspect of, I said it one way, the kid took it in a crazy direction, and then you have to go, no, no, what I meant is... five, six, seven, eight, nine, ten times, years over years, before they internalize the lesson you're actually trying to teach them.

gptboss:
I wonder with that too if it's something of an emotional problem. The way that I see education winning this is accepting that they can't do the thing that they purport to do anymore: the tests, assignments, and this kind of memorization are not going to be possible. Some of the things that I would like to see people get better at are socialization tasks, or even performance tasks. To your point earlier about the test of writing about a band of five teachers from our school, a thought that I had was: you could turn in the paper, but then you have to do question-and-answer on it, from memory, in a way that would demonstrate comprehension. And this would also include public speaking, performance, interpersonal speech, right? A lot of skills that are kind of underdeveloped and becoming more and more important each passing day, especially as the communication paradigm shifts from the written word to video content, so on and so forth. Anyways, I'm ready to wrap this one up if you are.

Adam:
Yeah. I mean, the real problem is I could talk about it forever.

gptboss:
Totally,

Adam:
Um,

gptboss:
me too.

Adam:
and it'll come up again.

gptboss:
Yeah.

Adam:
But what I do hope is that this possibly breaks the obsession with measurement

gptboss:
Mm-hmm.

Adam:
versus like actual learning, right?

gptboss:
Totally.

Adam:
When people go, what will we do, school is broken, what they mean is measurement is broken.

gptboss:
Mm-hmm.

Adam:
What they mean is the metrics are broken that we use to decide if people are good or not. But I am of a class of people who doesn't really think that they were good to begin with, because I spent a lot of my life thinking I'm not very smart, because I got C's, you know.

gptboss:
I thought I was smart until I got out of school, and then I started playing League of Legends and getting my butt handed to me. I don't think I'm smart anymore after being in the world. My small little fishbowl of high school doesn't match the world. I have good news, though. We're not done talking about this as we move on to the next thing,

Adam:
No, we're not.

gptboss:
because we're going to higher education.

Adam:
Yeah. I forgot about that. Taking it up a level.

gptboss:
Yeah.

Adam:
This is a piece from the Atlantic: "The First Year of AI College Ends in Ruin." The subheading says there's an arms race on campus and professors are losing.

gptboss:
Mm-hmm.

Adam:
Again, that headline is tilted toward that space of just, hey, AI is ruining the world,

gptboss:
Yeah.

Adam:
but the article is actually just about the back and forth. Nothing is ruined, and professors aren't losing. The article is actually about how all of the platforms that purport to be able to look at an essay and say, this is 18% AI-generated, this is a hundred percent AI-generated, aren't very good at it.

gptboss:
I have a thing, I have an industry insight on

Adam:
Yeah.

gptboss:
kind of this development. People email me and they're like, hey, does GPT Boss content pass AI detection? And from January until two weeks ago, so the start of May, it was passing as like 98% human, right? It didn't matter what kind of checker you were using. It was 98% human. And then in the last two weeks, it flipped to being like 11% human. But what happened at that same time

Adam:
Right.

gptboss:
is that a lot of content I was writing by hand to test was also getting marked as like 11% human. So these AI detectors swing. There are ways to fly under their radar when they're too loose, because they want to encourage human writing and never have a false positive AI detection. And now they've gone the other way: they don't really care about the false positives, they don't want to have any false negatives. And so there are just way more negatives.

Adam:
Right. And that's kind of the essence of the article. There are stories of people saying, my professor came and said this is a hundred percent AI-written, and it wasn't, I didn't use AI at all. And then there's other people saying, of course I'm using AI for these things. What, are you crazy?

gptboss:
Yeah.

Adam:
And I've talked to professors that are sort of like, oh, immediately, a hundred percent. Even the teacher I referenced earlier. I asked, do you think people are using it? He was like, I know they are.

gptboss:
Hehehe.

Adam:
The day it dropped, they came in and went, I'm using this for my essays now.

gptboss:
Yeah.

Adam:
And he's, anyway, okay, great. But there are so many interesting directions to go with this, right? The back and forth, the idea of detection tools. Arms race is the right way to say it. But I also grew up through the era of everyone believing that spell check was going to be the reason that none of us learned to spell. Like I lived

gptboss:
It didn't help.

Adam:
through grammar check and spell check as the boogeymen that were going to cause the downfall of modern children.

gptboss:
Yeah.

Adam:
And to be fair, there was for sure a period where that was true, but also, over time, I've internalized a lot of the things that spell check used to do for me. You notice it doing it over and over again, and eventually you internalize: oh, this is the right way to spell that thing. I think.

gptboss:
There's a lot of people that just turn it off, especially people that speak very colloquially over text, because the slang words that they're using are evolving too fast to be in the dictionary. So a lot of local dialects and forward-thinking cultures don't use that tool at all. And sometimes it fails spectacularly. Sometimes people don't know how to spell, like, psoriasis or something, which I don't know if anyone on Earth actually does without looking it up.

Adam:
Yeah, there's a few of those where it's like, how many times have you written diarrhea in your life, you know? It's spelled real weird.

gptboss:
Oh, buddy. So yeah, totally.

gptboss:
Yeah. But like, it just didn't fix it, right? The tool purported to fix something that it never truly fixed, and there was no overreliance that ever came in. And the people who are the best at language, the people on the forefront of linguistics developing new dialects, don't use these tools. So I think something similar is going to happen with AI. By the time that you get to your PhD, AI is just gonna be a spell check on your ideas, but the actual thinking has to happen outside of the box.

Adam:
And again, that gets to the testing, which is that you need the question to be one where the form doesn't matter; the ideas are what's supposed to matter,

gptboss:
Totally.

Adam:
which, you know, is the thing that I bumped into even in my philosophy major. I was stuck at a B all the time because I'm really bad at remembering who said what. So I'd always do poorly in the portion where it's like, we sure wish you quoted more people. And I'm like, yes, but I quoted all their ideas. I just don't remember if it was Kant or whoever that said it. But yeah, I throw

gptboss:
Kinda.

Adam:
this one in here mainly because I love that they're grappling with it on all levels and I love the existential crisis that it's causing because it's due.

gptboss:
Oh, yeah, yeah, higher education is such a mess. It's like one of those times where you just want to take out the drawer, turn it over, dump everything out, and just start over.

Adam:
But check out the post. Good conversation,

gptboss:
I do. I like the Atlantic. Yeah.

Adam:
and a fun lead image of a robot in a graduation gown.

gptboss:
Totally. So that's it. That's all for news.

Adam:
That's all for

gptboss:
Right?

Adam:
news. Let's get to today's neural network nourishment, where we've got... this one came from you, Mac:

gptboss:
Yeah.

Adam:
auto email responder.

gptboss:
Yeah. The slide says Responder, which I think is a good name for it. There is some detail missing from the text block, so if you're reviewing this code and you type it in character for character and then run it, it's not going to work, because some details are missing. Actually, using IMAP is a little bit complicated, and the package that I'm going to be using is called node-imap, if you want to look that up.

Adam:
So before we jump into that, what's the goal here?

gptboss:
What is this for, right? Yeah.

gptboss:
So a lot of people have written to me saying, hey man, I get a lot of emails and it's a big waste of time. Is there any way that AI can respond to my emails for me? I've been turning this over in my noggin for about two months, and finally I'm saying yes. I knew about this method two months ago, but there was a security consideration that I'm going to flag here, and then I'm going to explain why I don't really care about that security issue anymore. So this is for auto email responses. It doesn't actually automatically send the email. You're not Canadian, so you don't know about this, but in Canada we have anti-spam legislation, CASL, so there are certain restrictions that I have to follow when I'm designing software that not everyone else in the world has to.

Adam:
We have a version, but it's...

gptboss:
Like your ClickUp thing from yesterday would not have been compliant.

Adam:
Yeah.

gptboss:
So... although there are some ways you could argue it, we're not really interested in litigating that here. But this is a compliant thing, because at the end of the day, it doesn't actually send the email. It posts it as a draft where you can review it and decide to send it or not. But let's walk through this idea by idea. The very first thing is creating something called an IMAP object, and an IMAP is a computer's representation of what your email account is. Inside of an IMAP, you have the ability to send and receive emails. There are flags saying, hey, you just got a new piece of email. And there are also boxes, so if you're organizing your emails and saying, well, this one's from this sender, so it's going into this box, all of that kind of stuff is in there. But the main box that we're worried about is generally the inbox. To get in and start selecting the inbox, we need to create this IMAP object, which takes your email address, your email password, where this is hosted, which we should be able to detect from your domain name (all Gmail accounts are hosted at imap.gmail.com), your port, and then your TLS setting, and TLS is a security setting. Basically, all the user has to worry about, if you're using software that does this, is: what's my email account, what's my password? I did not like that. That was the security consideration that I hated: you've got to give me your email password for this to work, and then I have to process it as plain text in order to get into your IMAP. And the reason why I don't care anymore is because there are things like your Zapier setup that got set up the other day, right?
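A minimal sketch of the connection setup Mackenzie describes, assuming the node-imap package (`npm install imap`); the host-from-domain inference and the credentials are illustrative placeholders, not the episode's actual code:

```javascript
// Build the node-imap connection config from just an email and password.
// Inferring the host from the domain is a simplification; e.g. every
// Gmail account lives at imap.gmail.com.
function buildImapConfig(email, password) {
  const domain = email.split('@')[1];
  return {
    user: email,
    password: password,      // plain-text password: the security caveat above
    host: `imap.${domain}`,  // e.g. imap.gmail.com
    port: 993,               // standard IMAP-over-TLS port
    tls: true,
  };
}

// With node-imap installed, opening the inbox looks roughly like:
// const Imap = require('imap');
// const imap = new Imap(buildImapConfig('you@gmail.com', 'app-password'));
// imap.once('ready', () => imap.openBox('INBOX', false, onInboxOpen));
// imap.connect();
```

For real Gmail accounts you would use an app-specific password rather than the account password.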
People are already giving access to their emails to software that runs autonomously without any human intervention. I didn't like a live server just having your password in memory, but there are more and more tools every day, with tens of thousands of users, where people are already doing this. So even though it's not a security practice that I would recommend with my security hat on, if you're going to do it anyways, then no harm, no foul. That's how I upgraded my moral standing on this one and agreed to do it. So then, walking into the AI stuff, we have this idea of priming the agent as an email-answering AI. You could pass it a lot of details in here. This message can be quite long; I would limit it to around 4,000 characters, which is a lot of detail about your company. It's like having three or four About pages inside of this priming thing. So you could get a lot of detail in there on various products or divisions that you have. You could have large bios on all the staff, so this could be a directory kind of bot that says, here's the contact information for the person you're looking for. If you have a law office with a hundred people, a hundred bios and a hundred pieces of contact information could all stuff into this system message. The priming sequence is much simpler. This is where we're just saying, hey, I got this email from whoever your most recent email is from, so the AI knows the name of the sender.
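The priming flow just described can be sketched as two chat messages: a long system message carrying the company profile, and a user message carrying the incoming email. `COMPANY_PROFILE`, the sender name, and the commented-out model call are hypothetical stand-ins, not details from the episode:

```javascript
// Placeholder profile; in practice this is the ~4,000-character company
// description, product details, and staff directory described above.
const COMPANY_PROFILE =
  'Acme Law LLP. Practice areas: ... Staff directory: ... (up to ~4,000 chars)';

// Assemble the two-message prompt: system message primes the agent as an
// email-answering AI; user message hands it the email that just arrived.
function buildMessages(companyProfile, senderName, emailBody) {
  return [
    {
      role: 'system',
      content: `You are an email-answering AI for this company:\n${companyProfile}`,
    },
    {
      role: 'user',
      content: `I just got this email from ${senderName}:\n\n${emailBody}\n\nPlease draft a response.`,
    },
  ];
}

// A chat-completion call might then look like (API client not shown):
// const res = await openai.chat.completions.create({
//   model: 'gpt-3.5-turbo',
//   messages: buildMessages(COMPANY_PROFILE, 'Jane Doe', emailText),
// });
```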
Then you post the email and say, hey, can you draft a response? And because of everything happening inside the system message, all of the company details are available to the AI, so this draft response is going to be pretty close to factually correct, and there's no other special engineering we have to do. We just have this one priming message that explains the company, then the email that just came in, then we get a response, and it gets posted to your drafts. What do you think, Adam? Anything I missed? Any questions?
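The draft-instead-of-send step might look like the following, under the assumption that node-imap's `append` is used to file the reply; the mailbox name and addresses are placeholders (Gmail, for instance, names the folder `[Gmail]/Drafts`):

```javascript
// Turn the AI's reply into a minimal RFC 822 message so it can be filed
// as a draft. Real mail would also carry Date, Message-ID, and MIME headers.
function buildDraftRfc822({ from, to, subject, body }) {
  return [
    `From: ${from}`,
    `To: ${to}`,
    `Subject: ${subject}`,
    '',          // blank line separates headers from body
    body,
  ].join('\r\n');
}

const draft = buildDraftRfc822({
  from: 'you@example.com',
  to: 'client@example.com',
  subject: 'Re: Consultation request',
  body: 'Hi, thanks for reaching out...', // the model's reply goes here
});

// With an open node-imap connection (hypothetical), file it as a draft:
// imap.append(draft, { mailbox: 'Drafts', flags: ['\\Draft'] }, (err) => {
//   if (err) console.error('append failed', err);
// });
```

Because nothing is sent, the human still reviews and clicks send, which is what keeps the recipe on the right side of the anti-spam rules mentioned earlier.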

Adam:
No, it sounds useful. But like I say with a lot of these, when we show off code like this, I think one of the most valuable things about this recipe section is that it's not just showing people what they can do. It really is a concrete example of how to modulate your worldview around where this stuff shows up,

gptboss:
Mm-hmm.

Adam:
because when you're talking about this kind of access, the reality of this sort of offering is: if you're not worried about using it for something right now, like a lot of the people working on our platform or your platform are, you can probably just wait until Google builds this in, because it's 15 lines of code, right?

gptboss:
Hmm.

Adam:
And they have their own AI for it, right?

gptboss:
Yeah.

Adam:
It's obvious that there's going to be an automatic draft-a-response button in Gmail that just uses Bard to take a shot at it.

gptboss:
Yeah, certainly.

Adam:
And it really gets to that thing of, it really helps you understand how there's going to be ambient intelligence everywhere, right? This is less code than it would take for me to customize a pop-up button on the front end of a website so that it would show you a Calendly invite instead of just taking you to a link. Right?

gptboss:
And this is the hard way. Google has more access.

Adam:
Yeah, right. But in the meantime, we'll have a link to this implementation if you want to go play with it. Otherwise, thanks everybody for joining us for another Accelerate Daily. I've been Adam.

gptboss:
I've been Mackenzie.

Adam:
Take it easy, everybody. We'll see you tomorrow.

gptboss:
See ya.