Learning to Use AI Before It Comes with a Playbook

I had been hearing about machine learning and AI for years before LLMs became part of everyday work. Inside the company, we had already gone through several waves of trying to extract value from data. Some of it involved models, some of it was closer to analytics and automation, but the same question kept coming back: was it actually creating value in a way that justified the effort?

One of the earlier examples was a model we built to optimize what we showed to users. In one sense it worked: we were able to increase clicks and get more people to the join page. But the full loop did not hold. Conversion after that point was too low, and the cost of running the model ended up higher than the value it generated. Eventually the project was stopped.

We explored other things after that: recommendation tools through third parties, data pipelines to understand actual usage instead of relying only on gut feeling, and experiments around what people were paying attention to and how to turn that into something more useful. Some of those efforts were worthwhile, but they often had the same limitation. We could improve something locally without being fully sure it mattered at the business level.

That was part of the backdrop when LLMs started becoming widely available. What changed was not only the technology itself. AI stopped being something a small group experimented with on behalf of the company through specialized projects, and became a tool almost anyone could try directly in their own work: drafting a document, summarizing information, exploring an idea, preparing a message, or structuring a task.

Development was an obvious area, and a lot of attention went there first, but it was never only about code. The real change was broader than that. People across the company could start testing these tools against the actual shape of their day-to-day work, and that created immediate friction. We already had security guidelines around what could be shared in documents, emails, and other systems. Then suddenly we had tools where people could paste almost anything into a prompt.

That was not a theoretical concern. We were already dealing with basic issues like people sharing information in places they should not. So from that angle, LLMs felt less like a clean upgrade and more like a new source of risk that had arrived before most organizations were ready for it. It did not help that enterprise controls were still immature in many of the tools people wanted to use. Access was fragmented, people were buying their own subscriptions, and interest was growing faster than governance. Eventually we moved toward company-wide access, not just for developers but more broadly, once the tools were mature enough to support that.

Looking back, I do not think the messy start was entirely a bad thing. A lot of people wanted access before they really knew how they would use it. Part of that was curiosity. Part of it was not wanting to fall behind. On paper, it would have been easy to say that access should wait until there was a clearer structure, better training, and more defined use cases. In practice, I think that would have delayed the real learning.

Tools like this are hard to understand from a policy document or a short demo. People learn them when they start trying to solve something that matters to them. When you are your own stakeholder, the feedback loop becomes much more concrete. You can tell when the tool is helping, when it is slowing you down, and when it is giving you something that still needs too much correction to be worth it. That is also why I think organizations should encourage people to start thinking early about where AI could fit into their work, instead of waiting for a perfect recipe to arrive with it.

If people stay passive and wait for someone else to define every valid use case, they may get access to the tool without ever really learning how to use it. For me, that learning took time. It was not just about discovering prompts or comparing products. It was more about developing a feel for the kinds of tasks where the tool could actually help. Repetitive tasks were an obvious place to start, especially when there was already a recognizable pattern to the work. Drafting was another. Summaries, restructuring information, exploring options, and preparing a first pass that I could then review and refine all turned out to be useful in different ways.

Over time, I started noticing that the value was often less about replacing the work and more about reducing the friction to get moving. That same logic applies well beyond development. There is a lot of focus on generating code faster, and in some contexts that value is real. But the larger opportunity is in all the work around the code, and beyond it: writing, analysis, synthesis, preparation, communication, and other forms of computer-based work where a useful first draft or a structured response can save time without removing accountability.

What matters is learning where the tool fits and where it does not. A good candidate is usually a task you do often enough to recognize the pattern, where the result can be reviewed quickly, and where the risk of a bad answer is still manageable. That does not always require an LLM, but LLMs are often useful when the task includes language, ambiguity, or rough structure rather than fixed rules alone.

The harder part is that convenience can quietly become dependency. If you delegate too much, especially in areas where judgment matters, you risk becoming less connected to the work itself. That may not show up immediately. In some cases the familiarity comes back quickly once you re-engage. In others, especially when the domain is changing fast, stepping too far away from the underlying work can make it harder to notice what has changed. That is one of the tradeoffs I keep in mind. Saving effort is useful, but not if it slowly weakens your ability to think clearly about the work you still own.

It also brings back an older question for me. Just because a tool makes part of the work feel faster does not automatically mean it is creating meaningful value. Earlier AI efforts taught me that local improvement and real impact are not always the same thing. LLMs may be easier to adopt and easier to feel in day-to-day work, but the question still matters: what is the actual value once you compare the gain to the cost, the review time, the risk, and the loss of attention that can come with leaning on them too much?

That is why I keep coming back to the same point: these tools are useful, but they are still tools. They need practice, judgment, and boundaries. Security still matters. Context still matters. Review still matters. If the tool helps you think better, move faster, or reduce repetitive effort, that is valuable. But I do not think the goal should be to hand over the wheel completely.

What changed for me was not that AI suddenly had all the answers. It was that it became accessible enough to be learned through everyday use. That only works, though, if people are willing to spend real time with it, stay alert to where it helps, and keep responsibility for the result. Used that way, it can open up new ways of working. Used carelessly, it can just as easily create noise, risk, and distance from the work you still need to understand.

- Patrick