Navigating the AI hype
By Daniel Hardej

Why I’m still on the fence.

The jury is still out on AI.

There’s never been a better time to start learning software engineering. The resources available have never been more abundant or more accessible.

At the same time, there’s never been a better time to be lazy and have everything done for you and learn absolutely nothing.

We’ve heard a lot of audacious claims like:

“Don’t learn how to code”

“AI will replace developers”

“AI will replace jobs”

“AI will boost productivity”

Not all of these are true. But AI might take over the world; it’s just that its takeover might be more boring than what you’re imagining right now.

What most people think will happen: apocalypse!

What will actually happen: people will become over-reliant on it. Some people will continue to be talented, using it to enhance their productivity or the speed at which they work; others will be useless without it (that’s the bad part). It could also be used maliciously.

The only honest assessment of the situation isn’t “AI will take over”, and it isn’t that AI is overhyped and will never amount to anything.

It is unfortunately much more boring: we have no idea how things will turn out.

Sure, a few interesting things have happened:

High school students who were previously complete schmucks are suddenly writing good essays.

Shit developers started writing marginally better code.

Good developers became slightly more productive.

There were some bad things too:

The ability to mimic voices and images, copying people’s likenesses, has made it easier to scam others.

But putting the doom and gloom aside for just a moment…

AI, even in its early stages, has made some awesome leaps.

But it’s also produced some absolute shit.

The term generative AI is descriptive. But is the content it produces good or bad? So far, it’s been both.

Ironically, the claims made with so much conviction and certainty are making us more confused.

On one hand, we’ve got the Westpac case study: faster code, and more secure code too.

We’ve also got reports that the AI pair programmer has boosted developer happiness.

Something sorely needed, if Stack Overflow’s recent developer surveys are to be believed.

On the other hand, there have been claims that AI-generated code introduces bugs and security vulnerabilities.

GitHub is taking aim at that problem.

GitHub Advanced Security (GHAS), for example, is getting an infusion of AI, turning the AI coding assistant into a security assistant as well.

Around 25% of all new code at Google is now generated by AI, according to Sundar Pichai.

Maybe you trust asdfman123 more than Sundar, but this still accurately describes the typical experience of a developer using AI productivity tools.

25% doesn’t sound like much, but for a whole team or division that could be a lot of productivity, or time, or money.
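To make that concrete, here’s a back-of-envelope sketch in Python. Every number in it (team size, hours, costs) is a hypothetical assumption for illustration, not a figure from any study:

```python
# Back-of-envelope: what a 25% coding speedup might be worth to a team.
# All inputs are hypothetical assumptions, purely for illustration.

developers = 50             # size of the division (assumed)
coding_hours_per_week = 15  # hours actually spent writing code (assumed)
speedup = 0.25              # the claimed 25% productivity gain
hourly_cost = 80.0          # fully loaded cost per engineering hour (assumed)
weeks_per_year = 46         # working weeks per year (assumed)

hours_saved = developers * coding_hours_per_week * speedup * weeks_per_year
value_saved = hours_saved * hourly_cost

print(f"Hours saved per year: {hours_saved:,.0f}")   # 8,625
print(f"Approximate value:    ${value_saved:,.0f}")  # $690,000
```

Quibble with any of the inputs, but the shape of the result holds: small percentage gains multiplied across a division stop being small.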

But most of what we hear is quantitative data.

There’s not as much out there (although there is a little) about the qualitative aspects: what the AI is actually doing, and how well it’s doing it.

The complexity of the use case.

How well do you know your domain?

Is it really complicated?

Sometimes you get people who understand an archaic or esoteric field really well, one full of problems that software could solve, but who don’t know anything about software engineering.

Or the other way around: you have talented engineers who know nothing about a particular industry or domain and the problems it needs solved.

What about things like Cursor? (See, for example, the YouTube video “The Magic Is GONE”.)

Don’t get me wrong. I’m still bullish on AI.

And on a lot of other things, too.

Unfortunately, many of the most important tools and the biggest money makers are also the most boring, and they rarely get the hype and attention that AI gets.

The technological singularity (of stupidity).

In his 2015 book The Technological Singularity, Murray Shanahan explores the idea of the technological singularity, and what it would mean if ordinary human intelligence were enhanced or overtaken by artificial intelligence.

It’s an idea popularized by Ray Kurzweil.

What’s been happening so far?

Idiot essays from ChatGPT?

Data privacy leaks?

Lazy students and developers?

Apple Intelligence rolled back after doing dumb stuff…

When you don’t understand what you’re doing, using AI to develop software can make you an illiterate programmer. You end up shipping unmaintainable code and increasing technical debt.
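Here’s a contrived sketch of the difference (both snippets are hypothetical, not from any real codebase or any particular AI tool): the first version runs, but resists change; the second says what it means.

```python
# Version 1: accepted from an autocomplete without understanding it.
# It works today; nobody on the team can safely modify it tomorrow.
def proc(d):
    r = {}
    for k in d:
        if d[k][0] > 0 and d[k][1] != "x":
            r[k] = d[k][0] * 1.1 if d[k][2] else d[k][0]
    return r

# Version 2: the same logic, written (or at least reviewed and renamed)
# by someone who understands the domain. Intent is now on the page.
TAX_RATE = 0.10  # hypothetical 10% surcharge on taxable items

def price_with_tax(items: dict[str, tuple[float, str, bool]]) -> dict[str, float]:
    """Return final prices, skipping free items and cancelled ("x") orders."""
    prices = {}
    for name, (price, status, taxable) in items.items():
        if price > 0 and status != "x":
            prices[name] = price * (1 + TAX_RATE) if taxable else price
    return prices
```

The tools will happily generate either version. Which one you ship depends on whether you can tell them apart.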

How will AI change us?

It won’t be some singular, catastrophic robot event that destroys everything.

It’ll be a much more mundane series of events where we destroy ourselves.

We’ll continue to consume bland, often meaningless, and oversimplified information. It’s convincing, but likely wrong, or at least incomplete.

We’re the product of the information and media we consume!

Better music makes us better musicians.

Better art makes us better artists.

Better literature makes us more literate (in that it literally makes us better readers and writers).

Difficult, paradoxical, or cognitively dissonant concepts make us wiser.

Lazy students will get used to having AI write essays for them.

Lazy coders won’t learn how to solve hard problems.

Some won’t be able to differentiate between AI-generated nonsense and actual information.

And then what happens when we start buying into the moralising of an LLM (or rather, that of the people who created it, and their biases)?

What is GitHub doing?

  1. Copilot Workspace

  2. Copilot in GitHub Advanced Security (GHAS)

  3. GitHub Models

  4. AI code reviews

The DeepSeek elephant in the room.

Despite being on the receiving end of a lot of completely valid criticisms, DeepSeek did something good: it opened up a powerful model for free, made it open source, and in doing so revealed that you don’t need preposterous amounts of money raised at equally preposterous valuations to build AI software. It smashed a lot of moats.

It did create a mess. But in doing so, it might just have helped us narrowly avoid a dotcom-style bubble-and-crash moment. (Although we’ll have to see how all of our tech companies respond before we know for sure.)

In the near future (and now), the hardest work isn’t the things that AI is doing or what it’ll be able to do. It’s the process of listening to customers.

It’s hard because you can’t and shouldn’t listen to what everyone wants and then try to build all of it. It’s hard because you need a lot of people listening, people discerning what’s a good idea, people prioritising, people planning, and then people building. A lot of companies aren’t doing this right.

Certainly, AI tools will be useful for some of the tasks under this high-level process, but getting it right requires human competence, not artificial intelligence.

Think about what happened after the 2000 dotcom bubble.

Finally: when should you use AI for programming?

Starting from scratch – you have an idea, but absolutely no code. Get something basic built with an AI assistant, then get creative and build the new things yourself.

Understanding the nuances of a language you’re not familiar with.

Re-remembering: things you once knew, but you forgot.

Translating: this legacy code is in X, but I need it in Y… (see the sketch after this list).

Confusion: I know psql, but not MySQL – they’re similar but not the same!
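As an illustration of the translating case (both the legacy one-liner and its Python port are hypothetical): mechanical conversions like this are where an assistant shines, but you still need to know enough Python to verify the result.

```python
# Legacy version: an awk one-liner that sums the 3rd column of a CSV,
# skipping the header row:
#
#   awk -F',' 'NR > 1 { total += $3 } END { print total }' sales.csv
#
# A direct Python translation, the kind an AI assistant handles well
# because the source and target semantics are both well known:
import csv

def sum_third_column(path: str) -> float:
    """Sum the 3rd column of a CSV file, skipping the header row."""
    total = 0.0
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip header (awk's NR > 1)
        for row in reader:
            total += float(row[2])  # 3rd column (awk's $3)
    return total

if __name__ == "__main__":
    print(sum_third_column("sales.csv"))  # hypothetical input file
```

(As a bonus, the Python version correctly handles quoted fields that the naive comma-split in the awk one-liner would mangle.)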

AI is here to stay, but we’ve still got a long way to go.

Proudly not written by ChatGPT!

Cheers from the Author.

Bio:

Daniel Hardej is an Engineer, Entrepreneur, and Coffee Connoisseur based in Perth, Australia, currently working as a Technical Support Engineer at GitHub. He is part of the customer success team and serves as a “Copilot Champion,” actively promoting and working with GitHub’s AI-powered programming assistant, GitHub Copilot. His role involves supporting users, contributing to the development of AI tools, and sharing insights on technology and productivity.

Originally published at:
