Basic Concepts

A structured prompt template separates the persona, context, and query into clearly delimited sections:

def prompt(persona, context, query):
    # Delimit each section with XML-style tags so the model can tell
    # instructions, background, and the actual question apart.
    return f"""<Persona>
{persona}
</Persona>

<Context>
{context}
</Context>

<Query>
{query}
</Query>

<Response>
"""

Retrieval Augmented Generation

Enhance model responses by providing relevant external information:

def rag_prompt(
    query,
    retrieved_contexts=None,
    instruction="Answer based on the provided context.",
):
    # Avoid a mutable default argument; treat None as "no contexts".
    retrieved_contexts = retrieved_contexts or []

    # Number each retrieved passage so the model can refer back to it.
    context_section = "\n\n".join(
        f"Context {i + 1}:\n{context}"
        for i, context in enumerate(retrieved_contexts)
    )

    # Build the prompt at the left margin so the output carries no
    # stray indentation from the source code.
    return f"""Retrieved Information:
{context_section}

Question: {query}

{instruction}"""

Chain of Thought

Guide the model through complex reasoning:

def chain_of_thought_prompt(problem, steps_required=True):
    # Request explicit step-by-step reasoning, or just a worked answer.
    guidance = ("Please think through this step-by-step and explain your "
                "reasoning for each step." if steps_required
                else "Solve this problem by showing your work.")
    return f"""Problem: {problem}

{guidance}"""
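
For instance (the arithmetic problem is just an example input):

print(chain_of_thought_prompt(
    "A train travels 120 km in 1.5 hours. What is its average speed?"
))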

Self-Ask

Have the model break a question into smaller follow-up questions and answer them in turn:

def self_ask_prompt(question, allow_search_queries=True):
    # Optionally tell the model how to mark facts it needs to look up.
    search_note = (
        "If you need to search for specific information, "
        "format search queries as [SEARCH: your query].\n\n"
        if allow_search_queries
        else ""
    )

    return f"""Question: {question}

To solve this problem, I'll break it down into smaller questions and answer them one by one.

{search_note}Let me think through this carefully:"""
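
For example (the question is an illustrative multi-hop query):

print(self_ask_prompt(
    "Who was the US president when the Eiffel Tower was completed?"
))
# Passing allow_search_queries=False omits the [SEARCH: ...] instruction.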

Self-Improvement

Create a feedback loop for prompt optimization:

def self_improvement_prompt(original_prompt, model_output, goal):
    # Show the model its own prompt/output pair alongside the intended
    # goal, then ask it to critique and rewrite the prompt.
    return f"""Original Prompt:
"{original_prompt}"

Output Received:
"{model_output}"

Desired Goal:
"{goal}"

What are the weaknesses of the original prompt? How could it be improved to better achieve the desired goal?

After analyzing the weaknesses, provide an improved version of the prompt."""
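
A minimal example (the prompt, output, and goal below are placeholders for illustration):

print(self_improvement_prompt(
    original_prompt="Summarize this article.",
    model_output="The article is about science.",
    goal="A three-sentence summary covering the key findings.",
))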