The Prompting Gold Rush Has a Very Short Window

Wojciech Zieliński Jul 15, 2025

Prompt engineering is the hottest new skill, but for how long? LLMs are evolving so fast that today's complex prompt formulas might be redundant tomorrow. Read why this "future-proof" skill has an expiration date.

Is AI Prompting a Skill of the Future?

Training courses on writing AI prompts for LLMs seem to be flooding every online advertising channel right now. Specialists of various kinds (and quality) are training their students on how to write queries for Copilot, ChatGPT, Gemini, Claude, Perplexity, and all the rest of the LLMs so that they best understand the user's intentions.

At the same time, prompting is often perceived as a "new" skill, without which the future job market might look rather... bleak. However, is the ability to convey one's intentions clearly to an interlocutor (whether physical or virtual) really such a new skill that it needs to be taught in courses and training sessions? Or perhaps, in an increasingly virtual world, we are slowly losing the ability to articulate ourselves clearly and concisely, and this arcane knowledge of creating commands for LLMs is actually just relearning something we used to know but have lost today, whether due to lack of necessity ("an unused organ atrophies") or plain laziness ("let them guess what I meant")?


And another question: considering the speed of AI development, especially in the field of LLMs, will today's prompt-engineering training, which teaches that a command to a model should include things like "act as a specific person", a detailed description of our needs, and finally an emotional element ("if you do this wrong, I'll get fired", "thank you for your help"), have any application tomorrow?
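To make that formula concrete, here is a minimal, purely illustrative sketch in Python. The build_prompt helper and all of the wording are hypothetical, not taken from any particular course or API; the sketch only shows the persona + detailed task + emotional nudge structure such courses typically teach.

```python
# A minimal, illustrative sketch of the prompt "formula" described above.
# Everything here (the build_prompt helper, the example wording) is hypothetical,
# not taken from any particular course or API.

def build_prompt(persona: str, task: str, emotional_nudge: str) -> str:
    """Assemble a prompt the way many current courses teach it:
    persona first, then a detailed task description, then an emotional hook."""
    return f"Act as {persona}.\n{task}\n{emotional_nudge}"

prompt = build_prompt(
    persona="a senior technical recruiter",
    task=(
        "Review the CV below and list, in bullet points, the three strongest "
        "arguments for inviting this candidate to an interview."
    ),
    emotional_nudge="This is urgent; if I get this wrong, I'll get fired. Thank you for your help!",
)

print(prompt)  # in practice, this string would be sent to the LLM of your choice
```

Whether all of this scaffolding remains necessary, or a plain one-sentence request soon works just as well, is exactly the question above.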

Large language models already handle conversational context efficiently and can adapt to their interlocutor based on previous conversations. I have a feeling that very soon they will also be able to infer what even a less articulate user meant, without needing a structure straight out of a functional specification or a user story.


And if that turns out to be the case, then anyone hoping to make money by teaching people how to articulate themselves clearly may have a rather narrow window in which to do so, because very soon, advanced prompts may not be needed at all.

Wojciech Zieliński

Strategic Technology, Delivery & Transformation Architect

Seasoned technology executive and transformation leader dedicated to bridging the gap between high-level business strategy and complex engineering execution. Specialized in stabilizing volatile IT environments, scaling agile delivery across international borders, and mentoring the next generation of technology leaders. Whether acting as a Fractional CTO or an Interim Program Director, he establishes the operational discipline and strategic oversight needed to drive predictable, high-value outcomes in the most demanding industries.