$ Prompty

About Prompty

Overview

Prompty is an interactive tool for creating and refining LLM (Large Language Model) prompts through an iterative feedback process. It helps users craft effective prompts by leveraging the power of AI to generate and refine prompt drafts based on user feedback.

How It Works

1. Define Your Goal
Start by describing what you want to achieve with your prompt and selecting a prompt type that matches your needs.

2. Initial Draft
Prompty generates a first draft based on your goal, applying prompt-engineering best practices.

3. Review & Feedback
Evaluate the draft and provide structured feedback on what works well, what needs changing, and what's missing.

4. Refinement
Prompty refines the prompt based on your feedback, producing an improved version that addresses your concerns.

5. Iteration
Steps 3-4 repeat until you're satisfied with the result, allowing for continuous improvement.

6. Final Prompt
Once approved, your final prompt is ready to use with any LLM and can be exported in various formats.
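The six steps above amount to a simple generate-review-refine loop. A minimal Python sketch of that control flow (the three callables are hypothetical stand-ins for the real LLM calls and terminal UI, not Prompty's actual functions):

```python
def run_prompty_session(goal, generate_draft, get_feedback, refine):
    """Drive the draft -> feedback -> refine loop described above.

    generate_draft, get_feedback, and refine are caller-supplied; in the
    real tool they would wrap the LLM API and the terminal interface.
    """
    draft = generate_draft(goal)            # step 2: initial draft
    while True:
        feedback = get_feedback(draft)      # step 3: review & feedback
        if feedback is None:                # user approves -> step 6
            return draft
        draft = refine(draft, feedback)     # step 4: refinement (loops = step 5)
```

Returning `None` from `get_feedback` stands in for the user approving the draft; any other value triggers another refinement round.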

Technology Stack

Backend

FastAPI provides the core API with WebSocket support for real-time communication, built in Python with async endpoints for performance.

Frontend

Flask web server with xterm.js for terminal emulation, creating an authentic command-line experience in the browser.
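Assuming a standard Flask setup, the frontend could serve a page that loads xterm.js from a CDN and mounts a terminal in the browser; the route and markup here are illustrative only:

```python
from flask import Flask

app = Flask(__name__)

# Illustrative page: pulls xterm.js from a CDN and attaches a
# Terminal instance to a div, giving the command-line look and feel.
PAGE = """<!doctype html>
<html>
<head>
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/xterm/css/xterm.css">
  <script src="https://cdn.jsdelivr.net/npm/xterm/lib/xterm.js"></script>
</head>
<body>
  <div id="terminal"></div>
  <script>
    const term = new Terminal();
    term.open(document.getElementById("terminal"));
    term.write("$ Prompty\\r\\n");
  </script>
</body>
</html>"""

@app.route("/")
def index():
    return PAGE
```

In the real app the terminal would also open a WebSocket to the backend and wire `term.onData` to it, so keystrokes flow to the API and responses render in the terminal.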

Architecture

Containerized microservices using Docker, with Nginx as a reverse proxy for efficient routing and deployment.
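A hypothetical docker-compose sketch of that layout, with Nginx proxying in front of the Flask frontend and FastAPI backend; service names, ports, and paths are illustrative, not taken from the real project:

```yaml
# Illustrative only: one reverse proxy routing to two app containers.
services:
  nginx:
    image: nginx:alpine
    ports: ["80:80"]
    volumes: ["./nginx.conf:/etc/nginx/nginx.conf:ro"]
    depends_on: [frontend, backend]
  frontend:
    build: ./frontend   # Flask + xterm.js UI
    expose: ["5000"]
  backend:
    build: ./backend    # FastAPI with WebSocket support
    expose: ["8000"]
```

Nginx would typically route `/` to the frontend and `/api` (including WebSocket upgrades) to the backend, so the browser talks to a single origin.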

AI Integration

Leverages OpenAI's GPT models through their API to provide intelligent prompt generation and refinement.
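One refinement round against the OpenAI API might look like the following sketch. The system prompt, model name, and function names are assumptions; `refine_prompt` requires the `openai` package and an `OPENAI_API_KEY` in the environment, so only the message-building step runs without credentials:

```python
def build_refinement_messages(current_prompt: str, feedback: str) -> list[dict]:
    """Assemble the chat messages for one refinement round."""
    return [
        {"role": "system",
         "content": ("You are a prompt engineering assistant. "
                     "Improve the user's prompt based on their feedback.")},
        {"role": "user",
         "content": (f"Current prompt:\n{current_prompt}\n\n"
                     f"Feedback:\n{feedback}\n\n"
                     "Return an improved prompt.")},
    ]

def refine_prompt(current_prompt: str, feedback: str, model: str = "gpt-4o") -> str:
    # Requires the `openai` package and OPENAI_API_KEY; import is deferred
    # so the rest of the module works without the dependency installed.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=build_refinement_messages(current_prompt, feedback),
    )
    return resp.choices[0].message.content
```

Each pass through the review-and-refine loop would call `refine_prompt` with the latest draft and the user's structured feedback.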

Prompty is ideal for both beginners and experienced users who want to create effective prompts for any LLM application, from creative writing assistance to code generation to specialized knowledge tasks.

This web version builds on the original CLI tool, adding real-time updates, a more accessible interface, and improved export capabilities while maintaining the terminal aesthetic that prompt engineers love.