The Main Thread

Episode 05 Transcript

Welcome back to The Main Thread.
As I record this,
the software industry is in the throes of a wholesale transformation due to the advent of generative AI... Or maybe it's not.
It depends on which headlines you read.
Will AI fundamentally change the way we as engineers interact with our work and the world or is it mostly hype?
The truth will probably be revealed somewhere in the middle.
But there's no denying the hold this new technology has on our collective consciousness.
We're joined today by David Ashe,
senior software engineer with American Express.
Alex and I, fresh out of bootcamp, met David years ago at a Meetup
he ran that encouraged n00bs like us to teach and learn from each other as we broke into the industry.
His passion for learning, his skill for knowledge sharing, and his sense of humor are infectious.
I hope you'll enjoy today's episode as we explore the benefits,
risks,
practical uses, and downright bizarre results of AI.
Welcome back to The Main Thread.
My name is Alex Geer.
I'm here along with Brian Ogilvie.
We're joined today by David Ashe,
a senior software engineer with American Express. Uh, David, can you tell us a little bit about yourself?
I'm gonna tell you everything about myself.
Now, get ready for a monologue.
Um,
we only got half an hour,
man. Make it count!
I am a senior software engineer for a financial institution.
Um,
I've worked for other financial institutions.
I've worked for a company that streams video over bikes that you put in your room, and then you pedal in one place to burn calories.
You might have heard of it.
Um,
so I've been a front end person.
I've been a back end person.
I've been a DevOps person.
I'm a little bit in love with cybersecurity
now. For about three weeks,
I intensely thought I was gonna teach myself to be an AI engineer.
I'll probably be talking about some of the things I tinkered with.
Um,
and I'm a person who has thoughts on AI, so it's funny I find myself on this show.
Perfect.
Well,
that's what we're here to talk to you about.
We're here to talk to you about AI, generative AI, and how it affects our profession, especially as we move up to the more senior levels in the software engineering profession.
There's kind of two ways to approach this, because we're technical people in technical careers and we want to advance those careers.
So we can think about how it's gonna affect our job specifically.
But then,
um,
if you go on linkedin or any social media,
people have a lot of opinions on what's gonna happen in the future.
People seem to all of a sudden have a lot of really strong,
clear visions of how this is gonna go,
what AI is gonna be used for.
But even the people who build these LLMs at OpenAI don't exactly know how they work, or exactly who's gonna want to use them and for what reason.
So I think there's a lot of room for speculation,
and we technical people probably should have some say on what we think is going on with this stuff.
Well,
we don't want to be too skeptical,
right?
Because business leaders are already saying this is here and it's a competitive advantage.
We have to embrace it.
We don't want to be left behind,
right?
Non-technical people have already decided that technical people are gonna need this for their productivity.
I think we'll be talking a little more about that.
So like it's awkward to be in a position of saying,
hey,
I as a technical person,
have thought about this really deeply.
I don't know everything.
I'm not an AI expert per se,
but I have a lot of reason to be skeptical,
but it's a wave that's gonna crash over all of us and some of us are gonna get more damp than others.
Yeah.
But how exactly will we get wet when this wave crashes on us?
And what form
will that take?
How disastrous is AI gonna be?
How transformative and utopian is it gonna be?
I think we have to have some thoughts about it, even if it isn't necessarily our job to speak about it.
And then I think we'll also transition inevitably to OK,
that's nice to think about.
But at some point,
we're gonna be asked to use these at work,
we're gonna be asked to spin up GitHub Copilot or a customized version of GPT.
And people will be like,
why aren't you using this to generate code?
And either we'll have to come up with good arguments for what we do use it for and what it's useful for.
And we might have to say like,
I'll tell you why I'm not using it.
It's not useful in this case.
And will there be political considerations and like you have to just shut up and use it even though,
you know,
it's not making you more productive?
I guess I'm setting up how I've thought about this.
But I uh what do you guys think AI is gonna do to society?
Well,
it's funny because it's like you've said,
in some ways,
it doesn't actually even matter what we think.
Um already even at my company,
which is not nearly as big as your guys' companies.
Um We've had a number of projects that have been directed towards using ChatGPT's API just basically to say that we were first to market with it,
even if nobody even uses it,
that's already happened like multiple times,
some of those tools are useful.
But the point is that like the direction of that was towards using it for the sake of using it,
you know,
and that's the way a lot of these new tools end up being,
right is that the hype train is real and there's a significant real dollar value in being somebody who has used the newest shiniest thing.
Are you familiar with the term "glasshole?"
"Glasshole" was what people called the early adopters of Google Glass.
People were walking around with this weird thing over their face; it only covered one of your eyes, or maybe both, I can't remember.
So people kind of adopted Google Glass as soon as it became available because they wanted to be the ones to figure out how good it was and you know,
benefit from it.
And I think people were shamed. "Get out of here and stop recording me" was the reaction people had.
So how long until Apple comes out with their version that everyone loves and can't wait to be recorded?
Well,
it's called the Apple Vision Pro and soon you can get it for only 3000-something dollars.
Is that all? That's the price point?
Yeah.
Well,
I guess we'll see how that turns out.
Certainly,
I would say Apple Vision Pro seems better than some of the other offerings before.
Although it's certainly not unobtrusive,
you can't walk into a restaurant and have someone not notice that you're wearing it.
But in some cases,
that's probably by design,
like the main reason why Google Glass was killed off was because there were so many legal questions about having a device that was just recording without people knowing it was recording at any given point.
You know, in a lot of states, two-party consent states, both people need to agree before something can be filmed.
Yeah,
ownership questions are really coming up.
Ownership questions.
Like, when do you have to disclose that something was generated by AI?
That's something that is not at all clear. Who really wants to talk to a chatbot and not know that it's actually... who wants to be tricked into thinking a chatbot is human?
Nobody. So this is the case where like legislation probably has no hope of catching up with how quickly the technology is going,
but it's kind of desperately needed.
So,
one of the things I want to talk about is that a lot of times when we discuss AI, especially in our industry (but I notice it even when I speak about AI with my wife or people who are not in the industry), we tend to focus on all of the risks and all of the things that we should be worried about,
but there's often not much discussion of,
well,
why do people even want to use this and what are the benefits of it?
You know,
I was thinking lately about all of the AI that's being used and trained on images of cancerous growths.
So that rather than,
you know,
needing the finite number of hours that a highly accomplished human doctor has to look at images and potentially identify cancer,
these AI models are trained to scan millions upon millions of images,
uh MRIs et cetera and be able to identify very early stage cancer.
And there's a lot of applications like that that are out there and being worked on.
There's the ability for AI to analyze brain waves in order to understand that you want your arm to raise and they can use that to actually raise the arm or move the fingers of someone who is paralyzed.
This is really powerful stuff that was, up until very recently,
entirely impossible.
So I think any discussion around AI,
we have to consider what the benefits are before we understand what happens when unscrupulous folks get their hands on it and,
and bad things could and may happen.
There's an ethical discussion here around all of the good that could be done versus all of the evil that could be done.
You know what Boston Dynamics is,
right?
The robot company,
I'm sure you've seen the videos of their Spot robot, their Atlas robot doing the robot. The robot dogs were just wandering around New York City. The NYPD finally got them.
It's interesting that they let these robots out in very limited circumstances.
Most of the time you see this,
it's in a controlled environment in a video.
Right.
But it's funny that with conversational
AI,
we're just blasting it out into the world.
Yeah.
No controls at all.
Right.
And so it's like technology in a lab under demo settings can be very impressive,
right?
And certainly if I curate how an LLM behaves,
it can seem like, "This is it! AGI is here."
There's papers suggesting it may be sparks of AGI, but then I go and use it.
My primary use was ChatGPT 3.5.
But I've also used GitHub Copilot, which is based off of, uh, is it DaVinci? I forget what the model name is, but it's like GPT 3.5-ish, I think. So GPT-4 is out now, and then there's GitHub Copilot X, I think they're calling it, which is a GPT-4-based version that's coming out.
So we've yet to see maybe the best of what LLMs can do.
But what's interesting about this technology is it's kind of being tested out on social media with the public with no safeguards.
And I think there's a good reason they don't just let Atlas walk out into any random crowd because even the best design of a robot that size,
it could kill someone.
Right.
It could fall over and crush someone.
It's a PR nightmare.
But yeah,
I've been watching AI for a while and it's interesting you brought up health care because IBM Watson was a spectacular failure that had to be sold off for parts.
Um, but there have been successes with pre-transformer approaches; as I understand it, neural networks and deep learning, deep neural networks, do have impressive results. But it really hasn't crossed the line of being obviously useful at high scale, ready for the masses. Even self-driving cars are pre-transformer.
Andrej Karpathy is the guy I was thinking of, who worked on self-driving computer vision for Tesla: really good, but it's not ready to be deployed nationwide.
I think it's level five self-driving that we still haven't really achieved.
And that's the part where we can really let it drive on the streets.
People have been hit by these cars, and of course humans hit people with their cars too, so they may even be better than people.
But it's still not necessarily something that we're gonna trust.
There's a lot of psychology and politics and messy real-world stuff, and I think people are wise to protect new drugs, new military technologies, from just being thrown out there.
But when it comes to LLMs,
we can't get it out in front of people fast enough.
And maybe that's because it partially trains on its input.
So the idea is to make it better by getting people talking to it.
Speaking of history also, there's Tay, which Microsoft released in 2016 and which was turned into a Nazi hate speech bot within hours. And then there's RLHF, reinforcement learning from human feedback, I think the acronym is, the new filter they have on transformer-based LLMs to prevent them from going there. That was a success. But we've seen before how these things being thrown out into the world can lead to disaster. Yet,
That's still the approach that's being taken because it seems like everyone's really afraid to be last on this.
Which I think is a theme in the tech industry in general.
We are really anxious to get our feature out the door before someone else does.
And I work for a company whose motto famously was in the past,
"move fast and break things," and that has had to be updated and modernized a bit as our scale grew to the point where it actually mattered if you broke something. We need to actually make sure our infrastructure is stable, et cetera, and maybe take that extra beat to make sure that what you're about to ship is not going to bring down the entire system.
I think there's a perception as well that things like this generative AI,
as long as it's like still just text,
I mean,
people are talking about images in a much more guarded way than they are about text.
And I think the reason for that is that people just feel like text is not harmful. But a lot of people's interactions are text-based, and not knowing whether those conversations en masse are being influenced by nonhuman actors is obviously a lot more dangerous than most people give it credit for. Even if it's not dangerous,
which I think it might be.
I didn't say it wasn't dangerous.
I said that people perceive it as not being as dangerous as something like a missile,
you know.
Yeah,
I mean,
speech has consequences and if you can't trust communication,
that certainly has a lot of problems for a society,
right?
So there's people calling for the end of democracy as a result of these technologies.
Do I think that's really gonna happen?
I mean,
who really cares what I think?
Do I...
Have I been able to predict the rise and fall of governments?
I don't know,
I think those fears might be overblown but they certainly need to be thought through.
We're gonna find out the hard way if I'm wrong.
I mean,
there's Black Mirror episodes that foreshadow how poorly this could go,
right?
The crazy thing about Black Mirror, when you watch those episodes, is how close to today it feels. The thing that freaks me out the most is that it doesn't feel like it's very far in the future.
What I want
to ask you,
David is like,
we've talked a lot about how this affects the world,
like,
generally,
but I am curious if you have any thoughts about how this affects our jobs and our work right now,
like,
especially the most common thought that I've heard, as a direct impact right now, is that people don't want to hire junior engineers, specifically because there's already kind of a bias against hiring junior engineers these days. And then the perception is that, well, what these things can do right now is all those tasks that you would normally hand off to an inexperienced intern or junior engineer, where the benefit would be that they would be learning how to do the job and learning your system.
So I'm not sure that it's necessarily a positive thing.
Let me put it this way.
Let's zoom back in time to a strange time called the end of 2022.
When the job market was very hot, there was GPT; there were transformers; the transformer architecture was invented in 2017.
But you know,
we were just starting to see Midjourney and image generation,
but GPT-3 wasn't necessarily that impressive. Then it hit some threshold. Wasn't it the end of November 2022 when ChatGPT went live? So the world didn't know. Whatever that advance is, we didn't know that was coming.
So you tell me,
were we obsessed at all of our companies about typing competitions to make sure everyone typed as quickly as possible?
Was that the bottleneck we were worried about in our productivity at the end of 2022? No one was dying to pump out more code faster.
In fact,
if anything,
some companies would say slow down and make more high quality code,
we don't want you to be a typist master,
right?
And a lot of companies aren't hiring junior developers because they're super productive for the team.
They're hiring junior developers because they're really important for culture.
They bring in fresh thinking,
they help the job satisfaction of seniors because they have someone to teach.
So even if we're all going to be more productive with LLMs,
that wasn't a problem we really needed solved.
And we weren't desperately asking, "How do we get rid of these junior developers?" at the companies that hire junior developers. Although there's certainly a lot of companies asking, how do we hire seniors and pay them as juniors?
Right?
Um, the age-old question. And who knows, maybe ChatGPT will put downward pressure on our wages.
I doubt it.
It's not impossible.
Though,
right?
But I would put the question back to y'all.
Like,
what is the bottleneck in your organization?
What would make you move faster?
I'm guessing it's not code generation. Even if you could generate a class that does all the business logic you need in one second, I'm willing to bet there's still other things that you need to wait for and deal with before you can deliver.
I will answer your question about what's the biggest bottleneck at my job.
It is always information silos and teams that are working on very similar things in very similar spaces who don't know of each other's existence.
The biggest problem that we have,
especially at a large company like mine is work duplication,
slight differences in understanding of business requirements and/or uh disagreements on technical direction where the,
the hardest thing to do is for some adult in the room to get the right people talking to each other so that an agreement and a consensus can be reached.
And I have a hard time imagining how AI is ever going to do that.
Same here.
It's kind of like how Clinton said "It's the economy, stupid."
It's people. It's people, stupid.
Like, the problem is always people. At the end of the day, it's the process of people making a product that other people need and can use, you know. So AI doesn't really solve that fundamental problem.
Really.
Can I make a reference to the current box office
smash Oppenheimer?
Please do.
Yes.
As long as you don't spoil whether or not they make the bomb. At the end, the top is spinning, so you're not sure if it was a dream or not.
Um, so I forget who I was listening to an interview with, maybe a biographer. This guy was talking about how Oppenheimer was not the best physicist.
Like he was not one of the truly brilliant ones.
He was a very smart man,
but he was effectively not even on the list. Like, why is this general picking this guy?
And like the reason he was such a good fit is that he could think like a physicist,
but he also could communicate with people.
He could talk to the generals in human talk, from their perspective,
right?
He,
he was a people person too.
He had a more holistic skill set,
but he also was a physicist and could talk to physicists about things only physicists can,
right?
So I'd love to say I'm a brilliant software engineer but I don't know,
I might be just OK,
but I feel really good about my career future because I can talk to non-technical people about technical stuff.
And when people talk about technical problems,
I think,
well,
I work in a very highly regulated industry,
banking.
So I,
you know,
I think,
well,
there's legal implications to what we're talking about here,
there's regulatory implications.
How would the customer think of this?
How is it gonna fit into our existing infrastructure? It's being able to jump between the nontechnical and the technical.
I have a finance background.
So I can think about financial concepts and then jump back to how the classes and the threads and the HTTP and TCP sockets are gonna talk to each other.
But also like how does this work in a financial system?
How does this work in terms of financial regulation?
Business stakeholder X or Y is not gonna like this.
Even though it's a great idea,
they're gonna give us pushback for,
I don't know,
political reasons,
organizational reasons,
they have a different view of what the customer likes.
Uh There's a lot of things that could cause conflict even if I banged out the code instantly and it was perfectly unit tested and ready to go.
There would still be friction; I would have to go back to that ChatGPT LLM, I would have to prompt-engineer a new prompt, because the requirements are going to change a lot.
So here we are in 2023, knowing the realities: your job is not just to write code but to meet business requirements, and to convince the executives who are demanding those that your solution is going to meet them within all of the necessary parameters, within the constraints of the regulations and the realities of the financial system you're working in, or whatever system your industry has. When that executive looks at you and says,
hey,
everybody's using AI now I need you to be using AI,
how do you, as an accomplished, smart people person and engineer,
figure out what to do with AI,
how to use it to your advantage without saying,
oh,
I'm farming out all of the hard stuff to a machine that can type faster than I can?
Well,
I guess sidebar,
I made myself use ChatGPT.
I didn't want to be ignorant.
I didn't want to miss out either.
Right.
I had it generate code.
It generated bugs it couldn't even detect in its own code, in some cases for real-world use cases.
For the script I wrote, I asked it to fix the bug; it fixed it by creating a new bug. I asked it to fix the new bug. "No, my code's correct." OK. It's not. I literally just ran it through the Python interpreter. It's broken.
Yeah, people think AI
can't lie but it lies all the time.
Well,
it's actually been created to be confident,
to give answers confidently.
So that's kind of a feature when it's conversational but a bug when it's generating code.
So,
um,
we have autocomplete in IDEs or text editors,
right?
So we already have something similar to that idea.
That's what GitHub Copilot feels like.
It feels like an LLM-powered autocomplete.
But like I've had GitHub Copilot,
I've named functions.
That's how they want you to use it.
You name a function that describes what it's supposed to do and then it fills in code and then you can read it and sometimes it does very well.
Other times it has generated comments for me,
not code. Out of its corpus,
it just pulls out comments.
And I'm like, I don't need you to write comments for me.
I need you to write code,
right?
So I would say ChatGPT, as it's released now, in my experience doesn't seem like a productivity enhancer.
But the question was, what if someone says, I want you to use this? Particularly if non-technical people are bought in: OK, maybe we didn't have a problem with writing code fast enough before.
But if we have this thing that's affordable,
that can make us code faster,
surely it'll make the business money,
right?
Well,
maybe it will and I'll give it a shot.
But I do have concerns, though maybe it's gonna get a lot better soon and some of these concerns will be allayed. But if I have to spend so much time reading the code with suspicion... I mean, I'm not necessarily suspicious of my own code, but boy do I generate bugs.
So,
and that's why I write unit tests and that's why I run manual tests.
That's why I write integration and end-to-end tests with Docker whenever possible.
Right.
So I test my code to kind of make up for the fact that I tend to trust my code.
But also like when I'm writing,
Go,
I try to compile it,
make sure it works. And if it compiles,
then you ship it for sure.
Right.
So I guess like there is this burden of,
like this tool is really useful.
If it only makes mistakes 0.1% of the time, or 0.05% of the time, it's really useful. But if it makes mistakes 1% of the time, that actually might be a critical amount of errors that I can't really trust. In a financial institution,
1% is way too high of an error rate.
No.
Yeah,
I mean,
it's not necessarily going to translate into transaction accuracy because that would be completely intolerable.
But yeah,
I mean,
so there is a burden, necessarily. Then it is kind of like having a junior developer, a brilliant one maybe, who's really good at doing coding challenges, but they're not wise and they may not have deep knowledge, so they can create deeply flawed code.
So I guess if we were to update our software development lifecycle,
how we write tests,
how we write pipelines,
maybe we can make that not such a big deal, and maybe that's how these become really useful: we get better at testing our code, so it's much easier to catch the 1% of errors these pump out.
So I can tell you one use case, a counterpoint to yours. Um, and I think that's maybe because Brian is the odd man out in terms of being an optimist; you and me and David are very skeptical, and Brian always seems so much more optimistic than us with new technology.
He was probably hugged more as a child.
I think so.
Yeah,
we talked about before how I'm a baseball fan.
So I must have some level of self hate and also optimism buried deep within me.
Well,
I'm a baseball fan too.
I'm just,
just also deeply pessimistic at the same time.
So the point is, when it comes to new technologies, I've seen enough hype explode and collapse in real time to be skeptical. That was a long preamble to get to the point of saying that I think my skepticism has actually made me better disposed towards ChatGPT and Phind, because I don't have high expectations for what they can actually do.
And so what I use it for, or have found a very good use case for, is, say, I write a lot of Scala, right? Because I'm very smart and very handsome.
And that means that I write a lot of Scala code,
the long and the short of it is that Scala unfortunately,
because it's only used by smart, handsome people,
it's not a very popular language which means that there's not a lot of materials out there for it and it's a very verbose language.
So when you want to deserialize a JSON object,
you can't just say like,
oh give me whatever. You have to write out every field.
If you give that JSON object to ChatGPT and with a little bit of work,
you can get it to write you the class,
basically the case class to deserialize it and add all the fields to it.
And you say, give it to me in screaming snake case, and I want all of the fields to be this, that, and the other thing, and it will do a decent job of giving that to you. And that's something that would take like 15 minutes to do by hand, because it's just a lot of typing.
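To make that concrete, here's a sketch of the kind of case class you'd be asking for. The JSON shape, the field names, and the `Client` class are all hypothetical, invented for illustration, and the hand-rolled `fromFields` mapper just stands in for whatever JSON library (circe, play-json, etc.) the generated code would actually target:

```scala
// Hypothetical API response (keys in SCREAMING_SNAKE_CASE):
//   { "CLIENT_ID": "42", "FIRST_NAME": "Ada", "ACCOUNT_BALANCE": "12.50" }
//
// The tedious part ChatGPT can draft for you: one field per JSON key,
// renamed to Scala's camelCase convention.
final case class Client(
  clientId: Long,            // from "CLIENT_ID"
  firstName: String,         // from "FIRST_NAME"
  accountBalance: BigDecimal // from "ACCOUNT_BALANCE"
)

object Client {
  // Stand-in for a generated decoder: maps the raw key/value pairs
  // onto the typed fields of the case class.
  def fromFields(fields: Map[String, String]): Client =
    Client(
      clientId = fields("CLIENT_ID").toLong,
      firstName = fields("FIRST_NAME"),
      accountBalance = BigDecimal(fields("ACCOUNT_BALANCE"))
    )
}
```

The win is purely mechanical: with a real JSON object pasted into the prompt, the model types out the dozens of field declarations so you don't have to, and you review the result the same way you'd review any other generated code.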
So does it do a good job of deserializing the JSON array of all of your clients'
social security numbers?
It does actually,
and that's why I am quitting the podcast and going to go live on my island somewhere.
Um, no, but it does a good job of doing things that are very tedious and very predictable and low-stakes, right?
It has served in many ways to be, for me, like a better Google, because Google has also not been particularly helpful on a number of occasions; so many results are buried behind ads and all that sort of thing.
Um And so if there's something that's like,
relatively obscure or I want it to give me like something in a specific format or with specific variable names,
something that you can't just Google,
that you would have to do it by hand to adapt something that you found on Google,
it can actually do that decently well. Because I have very low expectations for its ability to actually do, you know, what the hype says it will do, I've found a lot of use for it, and I actually use it kind of like I use Google now a lot of the time. Phind (p-h-i-n-d) actually will cite the sources that it uses, which is pretty useful: it'll point you to the original AWS article or the original Stack Overflow post that it used to generate the results. Again, it's in no way perfect.
I'd say, on a good day, it's about 20% faster than just Googling something or figuring it out on my own.
But,
you know,
writing bash scripts, other things that I never would want to invest a lot of time in learning how to do properly, right?
Like it's good at just giving me something to work with,
right?
If that makes sense.
Um So that's just my little spiel for like how I've personally been able to use it.
Yeah,
I think it's great at generating some boilerplate, or telling me which module I should import in a language that I'm not familiar with. That kind of thing it can be helpful for.
Yeah.
But I would absolutely never trust it to actually write my production code.
But "never" is a long time.
And,
uh,
I feel that way now and I'm not sure I'll still feel that way in 24 months.
But we'll find out. We are actually running pretty well over time here.
But David,
I'll bet that you've got a closing monologue for us and I want to give you some time to do that before I move us to Picks and Plugs.
All right,
I would just close by saying, I mean, these AI models are effectively giant statistical engines that have come up with the most likely associations between tokens, either words or pieces of words, and the tokens can be different in a programming context.
So they're as good as the training set. So if these products evolve so that, say, I'm gonna be writing in a Go code base, let's train it on the standard library, or let's train it on our best Go repositories; then it's more likely to generate variable names that sound like what I think good variable names are, or what our team has agreed are good variable names, right?
So I do think one of the ways this could get a lot better is it becomes a lot more custom: it becomes the organization's LLM and it runs in their infrastructure.
Now,
that's gonna require quite a lot of infrastructure, available only to really large organizations, right, to essentially fork these models and then train them internally.
So that proprietary code or the proprietary process that you're feeding into it doesn't make its way out into the world. Totally.
Oh, by the way, also think of the attack surface these represent for really clever hackers, and all the damage they can do by getting the data or, worse, retraining your model without you knowing. Yikes. Big yikes.
So if you want me to wrap up in a statement: I think no one really knows what exactly these are gonna be used for or how useful they're gonna be.
Um, it might turn out that they're really good at churning out marketing copy and really bad ads for the Taboola area below the article you just read. Like, every single "fix it with this one weird trick" article now clearly has a Midjourney- or DALL-E-generated image next to it, right? And so the text is gonna start to all sound the same, and the images are gonna start to all look the same.
So that might not turn out to be the real use case for these things.
Maybe they're gonna gain a new ability; we don't know, it's changing every day. How much of it is hype, and how much of it is a marketing engine in hyperdrive? We shall see.
But I guess the right thing for us to do is to remember what we're actually good at in our careers.
And that's probably being thinking humans who piece together the communication in a large organization with stakes.
So LLMs might be part of that day-to-day journey,
but they're not nece--,
I really don't see us getting replaced... next year.
Well,
on that note of optimism,
I am going to move us over to Picks and Plugs,
which is the time in the show where we talk about any of these resources that we're enjoying lately,
things we want to share with our listeners.
Uh don't have to be tech related and uh we'll start with you,
Dave.
Well,
of course,
mine is tech related.
Of course it is.
I've been signed up for HackTheBox.com.
Uh I think cybersecurity in the LLM world and even before it,
that's really the differentiator I'm interested in.
So Hack The Box is pen testing training.
They literally spin up servers inside their infrastructure and,
either using a GUI or uh a .ovpn file,
you can VPN into their network,
so you get to literally attack their boxes.
I'm on the very,
very basic level.
I'm in the kiddie pool right now,
but I'm learning how to use Microsoft command line tools I've never heard of, how to run common attack vectors,
try out SQL injection,
try to guess passwords.
So it really lets you,
through doing, feel what it's like to attack a system, and then it becomes pretty obvious how you would defend.
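That attack-then-defend intuition is easy to reproduce at home. As a hedged sketch, here is a self-contained SQL injection demo against a throwaway SQLite database (assuming the `sqlite3` CLI is installed; the table, users, and payload are entirely made up for illustration, not taken from any Hack The Box exercise):

```shell
#!/bin/sh
# Throwaway database with two users -- invented purely for illustration.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE users (name TEXT, secret TEXT);
INSERT INTO users VALUES ('alice','alice-secret'), ('bob','bob-secret');"

# Naive app code: interpolates user input straight into the SQL string.
lookup() { sqlite3 "$db" "SELECT secret FROM users WHERE name = '$1';"; }

# Normal input returns just one row.
safe_out=$(lookup "alice")

# Classic injection payload: close the quoted string, add an always-true clause.
inj_out=$(lookup "alice' OR '1'='1")

echo "normal:   $safe_out"
echo "injected: $inj_out"   # now leaks every secret in the table
rm -f "$db"
```

The defense, once you've run the attack, is obvious: bind user input as a parameter instead of splicing it into the query text.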
And I think it's really cool and I'm gonna be turning one of my Ubuntu laptops into a Kali Linux box pretty soon.
You're on your way to Red Team. Or to be the ultimate defender of our AWS VPC.
But,
you know,
whatever. Alex,
what do you got for us?
Um My recommendations are a little bit light uh today.
So I'm just gonna recommend "Staff Engineer."
I'm going back through it since we're several episodes in. Everybody,
Read it.
It's good.
I've also got a project that I've been working on,
which is not really a plug for a specific thing,
but it's been very useful for me to learn how to do.
It's just getting better at like scripting and automating basic things.
I never thought that would be particularly sexy,
but like writing a little,
a little script that installs all my other scripts and it's,
and like will scaffold a project out for me.
Anything that I end up doing more than like a couple of times a day,
I've started just trying to get into the habit of automating,
and it's at least a lot more fun than just doing it over and over again, and it's pretty cool what you can do with basic bash scripting.
So ChatGPT will actually do that decently well,
not perfectly and... Bash scripting?
Yeah,
it's decently good at bash scripting.
Except that this is a perfect example of why it's not reliable at all.
I was trying to write a script that would generate a dummy video for me because the company I work for does a lot of social media.
We have to post a lot of like videos for um testing purposes.
I don't like to use real videos.
I'd like to just have a script that made a little square that spins in a circle or something like that.
Perfect.
Uh I asked ChatGPT to make one for me and it recommended using a third party library.
And I said,
well,
I don't really want to install some random third party library on my laptop.
Um What could possibly go wrong with that?
So I told it,
how do I do that without installing a third party library?
It gives me this,
spits out this big old script.
I'm like,
OK,
run.
It doesn't work on macOS.
It only works on Linux. I say,
OK,
make it work on macOS.
It says here you go.
Also doesn't work.
And then I go ask my friend who does film editing for a living.
And he says,
well,
why don't you just use this third party library?
That's what everybody uses.
The first thing it suggested. And well,
the point of this,
where the AI failed and why this was a useful learning experience for me in terms of its limitations,
is that a human,
literally the first human I asked about this, said,
this is the standard.
You're an idiot for trying to do it another way.
The AI was basically too eager to please.
It would only try to meet the exact requirements of the question I asked it,
without having the brains to push back.
And this has happened many times as I've used it over the past year or however long we've all been doing this in this timeline.
Um And that is: if it's wrong, or it's on the wrong tack, or you're trying to get it to do something differently,
it won't understand that the answer is that YOU were wrong.
It just doesn't understand that I hadn't framed the question properly.
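For what it's worth, the transcript never names the library the friend recommended, but ffmpeg is a common choice for exactly this job, and it can synthesize a dummy test video in one command with no input file at all (a hedged sketch; the duration, size, and filename are illustrative):

```shell
#!/bin/sh
# Sketch: generate a spinning-test-pattern dummy video with ffmpeg's
# lavfi virtual input device -- no source footage needed.
# ffmpeg is an assumption here; the episode doesn't name the actual library.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -y -loglevel error \
    -f lavfi -i "testsrc=duration=3:size=320x240:rate=30" \
    dummy.mp4
  echo "wrote dummy.mp4"
fi
```

Which is roughly the point of the story: the "standard" tool solves in one line what the from-scratch script never managed.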
Um And that's my,
how I transformed uh plugs into a long winded story about how smart I am.
You are smart, buddy.
Uh So I'm gonna actually take my pick in a little bit different direction. Since we've been talking about AI, and AI marketing loves buzzwords like
"deep learning,"
I want to talk about actual learning in my own human brain.
Uh My wife and I have been working on,
uh learning French lately and I'm still not,
not very good,
but my initial efforts were using several technologies that you've probably seen advertised or heard advertised on the commercial breaks of some of your favorite podcasts which are lucky enough to be sponsored.
But,
um,
you know,
I had not much success using those things because they were very much like "here is a picture and here is a thing to say about that picture."
And I knew how to say that thing about that picture.
But then when an actual human being walked up to me,
in a situation that didn't look like a picture I had seen in that app,
I didn't know what to do.
So,
um I found uh through some friends' recommendations,
there's a website that will actually pair you up with an actual teacher where you can have an actual lesson with uh with a person who is a native speaker.
And um this is how I've actually started to make some real strides.
The website is Verbling.com and it's been really great.
Um I got matched up with a teacher who is in Tunisia.
His name is Mourad.
By the way,
I,
I'll,
I'll go ahead and plug him personally because he's been uh a really excellent teacher and a,
a really fun way to do this that actually involves interacting in actual conversations, not with, say, an AI, but with an actual human being who knows how to teach, who actually is trained as a teacher.
So,
pretty cool.
Um So that's uh that's what I'm doing at 5:45 a.m. on most Wednesdays is having a French lesson and then I spend most of the rest of the day wishing I could go back to bed.
But it's,
it's still kind of the only way I found to make progress.
So I highly recommend.
All right.
Well,
folks,
thanks so much David for being here on the podcast.
We really appreciate you coming to share your thoughts on this and some insight.
I didn't get to the transcript of the Twilight Zone episode
I was gonna go over.
But next,
next time,
next time we'll get to it when we release the after-dark version,
we'll,
we'll add that in the Snyder Cut.
All right,
David.
Thank you again.
Uh Great,
great,
great talking with you.
Thank you. Listeners,
Thanks for tuning in.
We'll catch you next time.
Thanks for listening to The Main Thread.
As always, the views expressed by the hosts and guests of The Main Thread are our own and do not reflect those of our respective places of employment.
Are you enjoying the podcast?
Do you have a topic you'd like to hear discussed in a future episode?
Well,
please reach out to us with any suggestions or feedback.
We are @themainthread on all major social media platforms including Facebook,
Twitter,
Instagram,
and of course Threads. If you prefer,
you can send us an email at mainthreadpodcast@gmail.com.
For links to resources mentioned in today's episode or to download a transcript of the podcast,
please refer to our show notes. That's all for today's episode.
Thanks again for listening and we'll catch you next time on The Main Thread.

Copyright 2023. All rights reserved.
