Prompt engineering - Instruction-based LLMs

Prompt engineering is the practice of crafting prompts, the inputs given to a language model such as OpenAI's GPT-3, so that the model generates more relevant and accurate outputs for a specific task. Unlike fine-tuning, it requires no changes to the model's weights. In this blog post, we will explore how prompt engineering can enhance the performance of instruction-following models on various NLP tasks.

Prompt engineering principles:

  1. Practice writing clear and specific instructions

  2. Give the model time to think
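
The first principle can be made concrete by stating the task, the output constraints, and marking the input with delimiters. Here is a minimal Python sketch; the function name and the `<text>` delimiter are illustrative choices, not part of any library:

```python
def build_summary_prompt(text: str) -> str:
    """Build a clear, specific prompt: states the task, a length
    limit, and wraps the input text in explicit delimiters."""
    return (
        "Summarize the text between <text> tags in at most 30 words, "
        "as a single sentence.\n"
        f"<text>{text}</text>"
    )

# The resulting string would be sent to the model as the user message.
print(build_summary_prompt("The meeting covered Q3 targets and hiring plans."))
```

The delimiters make it unambiguous which part of the prompt is data and which part is instruction, which also helps guard against the input text being misread as an instruction.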

The best prompts are developed through an iterative process:


  1. Be clear and specific

  2. Analyze why the result falls short of the desired output

  3. Refine the idea and prompt

  4. Repeat

To achieve the best results, always refine the prompt against a batch of examples rather than a single test case.
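Refining against a batch can be sketched as a small evaluation loop in Python. Note that `run_model` below is a hypothetical stand-in for a real LLM call (for example, via the OpenAI API), stubbed here so the sketch is self-contained:

```python
# Hypothetical stand-in for a real LLM call; always answers "positive".
def run_model(prompt: str) -> str:
    return "positive"

def evaluate_prompt(template: str, examples: list[tuple[str, str]]) -> float:
    """Score a prompt template against (input, expected_label) pairs."""
    hits = 0
    for text, expected in examples:
        answer = run_model(template.format(text=text)).strip().lower()
        hits += answer == expected
    return hits / len(examples)

batch = [("Great product!", "positive"), ("Broke in a day.", "negative")]
accuracy = evaluate_prompt("Classify the sentiment: {text}", batch)
print(accuracy)  # 0.5 with this stub
```

Each refinement of the template gets re-scored on the same batch, so you can tell whether a change actually helped instead of judging from one example.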

Use cases for Prompt engineering:

  1. Summarizing text - for example, summarizing meeting notes or customer reviews of products.

  2. Inferring - for example, extracting the sentiment of a text (positive or negative reviews).

  3. Transforming - for example, translating input from one language to another.

  4. Fixing spelling and grammar

  5. Proofreading - for example, checking a review for errors

  6. Expanding a shorter text into a longer one

  7. Chatbots - any chatbot, such as a tiffin service order chatbot

    And many more...
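As a taste of the inferring use case, here is a small Python sketch of a sentiment-classification prompt plus a parser for the model's reply. The function names are illustrative, and the actual model call (for example, to OpenAI's chat API) is omitted:

```python
def build_sentiment_prompt(review: str) -> str:
    """Prompt for the inferring use case: classify review sentiment."""
    return (
        "What is the sentiment of the review between <review> tags? "
        "Answer with exactly one word: positive or negative.\n"
        f"<review>{review}</review>"
    )

def parse_sentiment(raw_answer: str) -> str:
    """Normalize the model's reply to one of the two expected labels."""
    word = raw_answer.strip().lower().rstrip(".")
    return word if word in ("positive", "negative") else "unknown"

print(build_sentiment_prompt("Loved it, works perfectly!"))
print(parse_sentiment("Positive."))  # positive
```

Constraining the answer format in the prompt ("exactly one word") makes the reply easy to parse programmatically, which matters once the output feeds into other code.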

I hope this article gives you some ideas for prompt engineering. In the next article, we will look at how to implement these ideas using Python, the OpenAI GPT-3.5 Turbo model, Next.js, and React.
