Have you ever had an argument with your smart speaker? You know the kind: you end up shouting something awful in frustration because the device doesn't understand what you are asking.

Eventually you reach a point where you are specific. So specific about how to proceed that the device actually gets it. It's exhausting enough that I use Chrome or Edge with YouTube Music instead, since it can cast to the speakers.

LLM to the rescue?

This is how I see people using Large Language Model (LLM) services. You need to specify so much in order to get anything useful back. It seems like a job.

The people using the current set of LLMs are specifying what they want, what they don't want, how to format the results, and a number of other constraints. At some point it just seems prudent to write the text yourself.

Don't be a Luddite

I think there are use cases where an LLM makes perfect sense. Writing Tweets is not one of them.

An LLM is my go-to service for creating any mildly complex regular expression. After getting a response, I plug it into an existing regex tester.
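You don't even need an online tester for this; a few quick assertions in a script do the same job. A minimal sketch, assuming a hypothetical LLM-suggested pattern for semantic version strings (the pattern and the sample inputs here are illustrative, not from any particular model):

```python
import re

# Hypothetical example: a regex an LLM might return for matching
# semantic version strings like "1.2.3". Verify it locally before trusting it.
pattern = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

# Sanity checks against inputs I know should and shouldn't match.
assert pattern.match("1.2.3")
assert pattern.match("10.0.1")
assert not pattern.match("1.2")        # too few components
assert not pattern.match("1.2.3.4")    # too many components
assert not pattern.match("v1.2.3")     # stray prefix

print(pattern.match("10.0.1").groups())
```

A handful of known-good and known-bad inputs like this catches most of the subtle mistakes an LLM-generated regex tends to make, such as missing anchors or over-greedy quantifiers.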

I don't have a need for language translation, but I can see it being rather helpful for software internationalization work.

Summarizing text seems to be a natural fit as well.

I do not think the current conversational input is valuable for accomplishing most work.

Over time, as the tools get better, it will become easier. But for now I'm just going to write my own code and text for whatever I'm working on.