
Advanced Configuration



If you are feeling fancy, you can customize how you send information to Empromptu. Use this config guide to add customizations to your prompt registry function.

If you have not yet replaced your static prompts with Empromptu's prompt registry function, visit the quickstart guide first.

In the prompt registry function

# Remember: this is the base function that you use to feed prompts from Empromptu into your code
prompt_context = prompt_registry.get_prompt(prompt_family_name, input_data)
prompt_registry.set_autolog()
ModelUtils.score_and_log(prompt_family_name, prompt_context)
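
How you use the returned prompt_context depends on your stack. As an illustrative sketch only, assuming prompt_context is (or can be rendered as) the prompt text and that you happen to call OpenAI's chat completions API:

# Illustrative only: assumes prompt_context renders to the prompt text you want to send.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": str(prompt_context)}],
)
print(response.choices[0].message.content)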

You can optionally make the following modifications; an illustrative sketch follows the table.

| Config | Description | Code |
| --- | --- | --- |
| Model | You can change the model | |
| Temperature | You can change the temperature | |
| Prompt Language | You can change the prompt language | |
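
As a rough, illustrative sketch only, these overrides might look something like the call below; the keyword names are hypothetical, so check the Empromptu SDK for the real ones.

# Hypothetical keyword arguments, for illustration only.
prompt_context = prompt_registry.get_prompt(
    prompt_family_name,
    input_data,
    model="gpt-4o",         # hypothetical: override the model
    temperature=0.2,        # hypothetical: override the temperature
    prompt_language="es",   # hypothetical: request the prompt in another language
)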

In the eval statement

# Remember: if you have defined your eval in your code, it looks like this
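# Presumably the scoring call from the base snippet above; confirm against your own integration.
ModelUtils.score_and_log(prompt_family_name, prompt_context)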

You can optionally make the following modifications; an illustrative sketch follows the table.

| Config | Description | Code |
| --- | --- | --- |
| Random | You can set your initial task to random | |
| Request | You get the best prompt from among the existing choices | |
| Custom Request | You can tell the request function to optimize on something other than the main eval score | |
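
Again as an illustrative sketch only, with hypothetical argument names rather than the documented Empromptu API:

# Hypothetical argument names, for illustration only.
prompt_context = prompt_registry.get_prompt(
    prompt_family_name,
    input_data,
    initial_task="random",   # hypothetical: start from a random initial task
)

# Hypothetical: optimize using a custom metric instead of the main eval score.
ModelUtils.score_and_log(prompt_family_name, prompt_context, metric="my_custom_metric")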

Embeddings

You can optionally make the following modification; an illustrative sketch follows the table.

| Config | Description | Code |
| --- | --- | --- |
| Embedding | You can set a custom embedder | |
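
As a hypothetical sketch, assuming an illustrative setter name rather than the documented Empromptu API:

# Hypothetical method name and signature, for illustration only.
def my_embedder(text: str) -> list[float]:
    # Return your own embedding vector for the given text.
    ...

prompt_registry.set_embedder(my_embedder)   # hypothetical setter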

General

You can optionally make the following modifications; an illustrative sketch follows the table.

| Config | Description | Code |
| --- | --- | --- |
| Latency | You can change the latency | |
| Run Token Stats | You can run token stats separately | |
| Turn off Debugger | You can turn off the debugger | |
| Turn off Auto Logger | You can turn off the Auto Logger | |
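
As a hypothetical sketch, where only set_autolog appears in the base snippet above and every other name and argument below is illustrative rather than the documented Empromptu API:

# Hypothetical calls, for illustration only; confirm the real names in the Empromptu SDK.
prompt_registry.set_autolog(False)               # hypothetical: turn off the Auto Logger
prompt_registry.set_debug(False)                 # hypothetical: turn off the debugger
ModelUtils.run_token_stats(prompt_family_name)   # hypothetical: run token stats separately
prompt_context = prompt_registry.get_prompt(
    prompt_family_name,
    input_data,
    max_latency_ms=2000,                         # hypothetical: change the latency budget
)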
