Ghost: A Database for Our Times?

I came across one the other day that I think is a good fit for AI use cases. Ghost, from ghost.build, describes itself as “the first database built for agents.”
Ghost is an agent-first Postgres database that lets developers and AI agents easily create, fork, test, query, manipulate, and delete entire databases.
And it's completely free to use.
The developers who built it had a simple idea: if agents are going to build software, test migrations and schemas, write SQL, and examine data, then they need databases that are as cheap to create and discard as code sandboxes. This is where Ghost comes in.
When you create a Ghost database, or fork an existing one, that database lives on Ghost's cloud infrastructure, not on your local system. Ghost is particularly useful for testing, prototyping, agent workflows, database branching, migration testing, and throwaway database scenarios.
Managed databases are traditionally designed for long-lived production infrastructure: you create an instance, configure networking, manage data, connect applications, and treat the database as critical and fragile. Ghost retains the power of Postgres but adds a workflow that feels much closer to modern agent development: create a database where needed, fork it when you want an isolated copy, run SQL against it, check the schema, try different table configurations, and drop it when you're done.
This makes Ghost a natural fit for AI tools like Codex and Claude Code. Those tools can read code, write migrations, debug queries, generate seed data, inspect logs, and call MCP tools. Ghost's built-in MCP server lets Codex manage databases directly, rather than forcing the agent to rely on vague commands, copied connection strings, or manual dashboard work.
In this article, I'll show you how to install Ghost on your local system. We'll then work through four practical examples of using Ghost with the Codex agent and with the Ghost CLI.
NB I have no affiliation or relationship with the company or team behind Ghost or the Ghost product itself.
What is required
I assume you already have one of the Ghost-supported code agents installed on your system. This includes:
Claude Code
Codex
Cursor
Gemini CLI
Google Antigravity
Kiro CLI
VS Code
Windsurf
You'll also need a GitHub account; you'll grant Ghost access to it when you sign in.
Installing Ghost
On Linux, WSL on Windows, or macOS, you can use the following curl command.
$ curl -fsSL | sh
If, like me, you're on Windows, you can run this command in the PowerShell terminal.
PS C:\Users\thoma> irm /install.ps1 | iex
Then run,
PS C:\Users\thoma> ghost login
Opening browser for authentication...
Found space: nj5scy2orp
Successfully logged in as [email protected]
You will be presented with a screen like this. Authorize Ghost to continue.
After you've logged in, the next step is to make the Ghost MCP server available to your favorite coding agent. In my case, that's Codex.
Type the following in the command line and select your agent from the displayed list.
PS C:\Users\thoma> ghost mcp install
Select an MCP client to configure:
1. Claude Code
> 2. Codex
3. Cursor
4. Gemini CLI
5. Google Antigravity
6. Kiro CLI
7. VS Code
8. Windsurf
Typing: 2
Successfully installed Ghost MCP server configuration for codex
Configuration file: C:\Users\thoma\.codex\config.toml
Next steps:
1. Restart codex to load the new configuration
2. The Ghost MCP server will be available as 'ghost'
PS C:\Users\thoma>
Check that your setup is OK
- Type the following to check if Ghost MCP is installed correctly.
PS C:\Users\thoma> ghost mcp list
TYPE NAME
prompt design-postgis-tables
prompt design-postgres-tables
prompt find-hypertable-candidates
prompt migrate-postgres-tables-to-hypertables
prompt pgvector-semantic-search
prompt postgres
prompt postgres-hybrid-text-search
prompt setup-timescaledb-hypertables
tool ghost_connect
tool ghost_create
tool ghost_create_dedicated
tool ghost_delete
tool ghost_feedback
tool ghost_fork
tool ghost_fork_dedicated
tool ghost_list
tool ghost_login
tool ghost_logs
tool ghost_password
tool ghost_pause
tool ghost_rename
tool ghost_resume
tool ghost_schema
tool ghost_sql
tool ghost_status
tool search_docs
tool view_skill
Next, check that your agent can see the Ghost MCP server. You should see output like this (for Codex).
PS C:\Users\thoma> codex mcp list
Name Command Args Env Cwd Status Auth
ghost C:\Users\thoma\AppData\Local\Programs\Ghost\ghost.exe mcp start - - enabled Unsupported
Assuming everything is OK, we can move on to some examples of using Ghost through its CLI and through agents via its MCP server.
Example 1 – Creating sales data
Open your code agent and type the following into it.
Codex> Create a ghost database called sales_data that stores 10,000 dummy sales
records. In the database, create whichever tables are suitable for such
data. Do not include any indexes, primary or foreign keys for now.
Codex produced this output.
Worked for 1m 32s
Created a fresh Ghost database named sales_data.
It contains 10,000 dummy sales order records and supporting tables:
customers 1,000
products 150
sales_reps 30
sales_orders 10,000
sales_order_items 30,000
Per your instruction, I did not create any:
primary keys
foreign keys
indexes
Verification confirmed there are no primary key constraints, no foreign key
constraints, and no indexes on the sales tables.
As you can see, on my system this took just over a minute and a half, which is impressive for what was accomplished.
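If you want to verify the "no indexes, no constraints" claim yourself, a query against the standard Postgres catalogs does the job (nothing Ghost-specific here; you can run it via ghost sql or any Postgres client):

```sql
-- Count user indexes and PK/FK constraints in the public schema
SELECT
  (SELECT count(*) FROM pg_indexes
    WHERE schemaname = 'public')          AS index_count,
  (SELECT count(*) FROM pg_constraint c
     JOIN pg_class t     ON t.oid = c.conrelid
     JOIN pg_namespace n ON n.oid = t.relnamespace
    WHERE n.nspname = 'public'
      AND c.contype IN ('p', 'f'))        AS pk_fk_count;
-- Both counts should be 0 for the freshly created sales_data tables
```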
Example 2 – Using the Ghost CLI
Now that we have a database to play with, let's try some of the Ghost CLI commands. You usually won't need them, since everything I'm about to show can be done through Codex and Ghost MCP, but they're useful to know in any case.
PS C:\Users\thoma> ghost list
ID NAME STATUS STORAGE
sip3qcs40a sales_data running 211MiB
- Look at the database tables and structure
PS C:\Users\thoma> ghost schema sales_data
DATABASE: sales_data (sip3qcs40a)
TABLE: customers
customer_id BIGINT GENERATED ALWAYS AS IDENTITY
customer_name TEXT NOT NULL
email TEXT NOT NULL
country_code TEXT NOT NULL CHECK ((length(country_code) = 2))
segment TEXT NOT NULL CHECK ((segment = ANY (ARRAY['consumer'::text, 'small_business'::text, 'enterprise'::text])))
created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now()
TABLE: products
product_id BIGINT GENERATED ALWAYS AS IDENTITY
sku TEXT NOT NULL
product_name TEXT NOT NULL
category TEXT NOT NULL CHECK ((category = ANY (ARRAY['software'::text, 'hardware'::text, 'services'::text, 'training'::text, 'support'::text])))
base_price NUMERIC(10,2) NOT NULL CHECK ((base_price > (0)::numeric))
active BOOLEAN NOT NULL DEFAULT true
TABLE: sales_order_items
sales_order_item_id BIGINT GENERATED ALWAYS AS IDENTITY
sales_order_id BIGINT NOT NULL
product_id BIGINT NOT NULL
quantity INTEGER NOT NULL CHECK (((quantity >= 1) AND (quantity <= 6)))
unit_price NUMERIC(10,2) NOT NULL CHECK ((unit_price > (0)::numeric))
discount_amount NUMERIC(10,2) NOT NULL DEFAULT 0 CHECK ((discount_amount >= (0)::numeric))
line_total NUMERIC(12,2) NOT NULL CHECK ((line_total >= (0)::numeric))
TABLE: sales_orders
sales_order_id BIGINT GENERATED ALWAYS AS IDENTITY
customer_id BIGINT NOT NULL
sales_rep_id BIGINT NOT NULL
order_status TEXT NOT NULL CHECK ((order_status = ANY (ARRAY['pending'::text, 'completed'::text, 'shipped'::text, 'cancelled'::text, 'refunded'::text])))
ordered_at TIMESTAMP WITH TIME ZONE NOT NULL
payment_method TEXT NOT NULL CHECK ((payment_method = ANY (ARRAY['card'::text, 'bank_transfer'::text, 'paypal'::text, 'invoice'::text])))
currency_code TEXT NOT NULL DEFAULT 'USD'::text CHECK ((currency_code = 'USD'::text))
subtotal_amount NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((subtotal_amount >= (0)::numeric))
tax_amount NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((tax_amount >= (0)::numeric))
shipping_amount NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((shipping_amount >= (0)::numeric))
total_amount NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((total_amount >= (0)::numeric))
TABLE: sales_reps
sales_rep_id BIGINT GENERATED ALWAYS AS IDENTITY
rep_name TEXT NOT NULL
region TEXT NOT NULL CHECK ((region = ANY (ARRAY['north_america'::text, 'europe'::text, 'asia_pacific'::text, 'latin_america'::text])))
hired_at DATE NOT NULL
VIEW: pg_buffercache
bufferid INTEGER
relfilenode OID
reltablespace OID
reldatabase OID
relforknumber SMALLINT
relblocknumber BIGINT
isdirty BOOLEAN
usagecount SMALLINT
pinning_backends INTEGER
VIEW: pg_buffercache_numa
bufferid INTEGER
os_page_num BIGINT
numa_node INTEGER
- Select some data from the sales_orders table
PS C:\Users\thoma> ghost sql sales_data "SELECT * FROM sales_orders LIMIT 10;"
sales_order_id │ customer_id │ sales_rep_id │ order_status │ ordered_at │ payment_method │ currency_code │ subtotal_amount │ tax_amount │ shipping_amount │ total_amount
────────────────┼─────────────┼──────────────┼──────────────┼───────────────────────────────┼────────────────┼───────────────┼─────────────────┼────────────┼─────────────────┼──────────────
1 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 1796.88 │ 143.75 │ 0.00 │ 1940.63
2 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 1622.63 │ 129.81 │ 0.00 │ 1752.44
3 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 1867.25 │ 149.38 │ 0.00 │ 2016.63
4 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 606.02 │ 48.48 │ 0.00 │ 654.50
5 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 1450.11 │ 116.01 │ 0.00 │ 1566.12
6 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 1227.85 │ 98.23 │ 0.00 │ 1326.08
7 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 7086.55 │ 566.92 │ 0.00 │ 7653.47
8 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 696.92 │ 55.75 │ 0.00 │ 752.67
9 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 1063.73 │ 85.10 │ 0.00 │ 1148.83
10 │ 8 │ 9 │ completed │ 2024-05-21 05:08:34.332998+00 │ paypal │ USD │ 2489.06 │ 199.12 │ 0.00 │ 2688.18
(10 rows)
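The ghost sql command takes any SQL string, so richer analytical queries work the same way. As a sketch (column names taken from the schema shown above; the joins work fine even without foreign keys), revenue per sales region might look like this:

```sql
-- Total completed-order revenue per sales rep region
SELECT r.region,
       count(*)            AS orders,
       sum(o.total_amount) AS revenue
FROM   sales_orders o
JOIN   sales_reps   r ON r.sales_rep_id = o.sales_rep_id
WHERE  o.order_status = 'completed'
GROUP  BY r.region
ORDER  BY revenue DESC;
```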
- Are you worried about your agent trashing your data? Easy: just set your database to read-only
PS C:\Users\thoma> ghost config set read_only true
Now in Codex,
Codex> delete from sales_order_items where sales_order_item_id = 12
Worked for 5s
The DELETE was blocked, as expected:
ERROR: cannot execute DELETE in a read-only transaction (SQLSTATE 25006)
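That error is plain Postgres behavior: SQLSTATE 25006 is what the server raises for any write attempted in a read-only transaction. You can reproduce the same guard in any Postgres session, independent of Ghost:

```sql
-- Make every new transaction in this session read-only
SET default_transaction_read_only = on;

DELETE FROM sales_order_items WHERE sales_order_item_id = 12;
-- ERROR:  cannot execute DELETE in a read-only transaction
```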
There are a number of other useful ghost CLI commands. For a complete list, type ghost --help at your command line.
But, as I said before, one of the main reasons for using Ghost is that we can manipulate databases using our AI agent. Let's see that in action.
Example 3 – Using Ghost MCP with our agent
We already saw in Example 1 how quickly and easily Codex can create a database; now let's see how easily the agent can work with one.
- Add 500,000 records to the sales_order_items table
Codex> Add 500,000 dummy sales order records to the sales_order_items table
Added 500,000 dummy rows to sales_data.sales_order_items.
Final counts:
sales_orders 10,000
products 150
sales_order_items 530,000
I also recalculated the 10,000 order totals from the expanded line-item table.
Totals now:
Subtotal sum: 1,530,141,808.73
Tax sum: 124,981,128.08
Total sales: 1,639,903,892.69
Avg order value: 163,990.39
Confirmed the tables still have 0 indexes.
- Select a record from the sales_order_items table
Codex> Retrieve the sales_order_items record for
sales_order_item_id = 519932 and tell me how long it took to retrieve
Record found:
sales_order_item_id: 519932
sales_order_id: 9932
product_id: 15
quantity: 3
unit_price: 91.47
discount_amount: 0.00
line_total: 274.41
Timing:
Database execution time: 25.833 ms
Ghost MCP end-to-end time: 146.733 ms
Because there are no indexes, Postgres used a sequential scan and filtered through 529,999 rows to find the record.
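You can see the sequential scan for yourself with EXPLAIN ANALYZE (standard Postgres; the exact plan format and timings will vary):

```sql
EXPLAIN ANALYZE
SELECT *
FROM   sales_order_items
WHERE  sales_order_item_id = 519932;
-- With no index, the plan shows something like:
--   Seq Scan on sales_order_items
--     Filter: (sales_order_item_id = 519932)
--     Rows Removed by Filter: 529999
```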
- Now for some action
Codex> Fork 3 duplicate sales_data databases in parallel, naming them
sales_data-1, etc. Using differing strategies in each fork, tune the
database so that the above query runs more quickly. Output your results in
an ASCII table with database name, tuning strategy and database execution time.
I'll create three forks concurrently, then apply a different lookup strategy
to each fork: a plain btree index, a covering btree index, and a physically
clustered table plus index. Then I'll measure the same EXPLAIN ANALYZE
execution time on each.
Used Ghost
Only the source sales_data exists, so there are no fork-name collisions.
I'm launching the three fork/tune/measure workers now; each worker waits only
for its own fork before tuning it.
Worked for 2m 40s
Created and tuned three forked databases in parallel.
Baseline on untuned sales_data was previously 25.833 ms with a sequential scan.
+--------------+---------------------------------------------------------------+--------------------------+
| Database | Tuning strategy | Database execution time |
+--------------+---------------------------------------------------------------+--------------------------+
| sales_data-1 | Btree index on sales_order_item_id | 0.131 ms |
| sales_data-2 | Covering btree index including all projected query columns | 0.143 ms |
| sales_data-3 | Hash index on sales_order_item_id for equality lookup | 0.158 ms |
+--------------+---------------------------------------------------------------+--------------------------+
Best result: sales_data-1 with a standard btree index at 0.131 ms.
As you can see, and as we would expect, the speedup after tuning was dramatic: roughly a 200x improvement in query execution time isn't too bad.
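For reference, the three strategies in the table map onto ordinary Postgres DDL roughly like this (a sketch; the index names are mine, since Codex didn't show the ones it chose):

```sql
-- 1. Plain btree index (sales_data-1)
CREATE INDEX idx_soi_id
    ON sales_order_items (sales_order_item_id);

-- 2. Covering btree index (sales_data-2): INCLUDE (Postgres 11+) stores
--    the remaining projected columns so the query can be answered from
--    the index alone
CREATE INDEX idx_soi_covering
    ON sales_order_items (sales_order_item_id)
    INCLUDE (sales_order_id, product_id, quantity,
             unit_price, discount_amount, line_total);

-- 3. Hash index (sales_data-3): supports only equality lookups,
--    which is exactly what this query does
CREATE INDEX idx_soi_hash
    ON sales_order_items USING hash (sales_order_item_id);
```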
Example 4 – Creating a dashboard application for our data
This example shows how easy it is to integrate an application built with coding agents with a database built with coding agents.
Codex> Use your performance tuning findings to upgrade the sales_data
database. Then construct a dashboard using javascript and HTML for the
database. The dashboard should be dynamic. I want to be able to filter on data
and the dashboard should only show data that is inside the database.
I'll turn this into a small live dashboard app: an Express backend with
read-only Postgres queries, and a browser UI that fetches filtered data
from /api/dashboard. I'll keep the database URL in an environment file
rather than putting credentials in the HTML.
Ran 3 commands
The workspace already has pg; I'll add express and dotenv, then fetch a
read-only Ghost connection string for the app.
Ran 1 command, used Ghost
I'll avoid putting the database password into the HTML. The server will get a
read-only connection from the Ghost CLI at startup if DATABASE_URL is not
set, and all browser requests will go through parameterized API endpoints.
Ran & 'C:\Users\thoma\AppData\Local\Programs\Ghost\ghost.exe' connect --read-only sales_data
The read-only Ghost connection string works and includes sslmode=require
plus Ghost's protected read-only option. I'm adding the backend and replacing
the HTML with a live API-driven dashboard.
The backend API is in place. I'm replacing the dashboard HTML with a
frontend that calls /api/options and /api/dashboard whenever filters change.
Created 1 file
The live frontend and backend are written. I'm starting the server now and
checking the API endpoints before I hand you the URL.
The server is now running on port 5101. I'm verifying the browser HTML and a
filtered API call against the live database.
Worked for 7m 28s
Done. The dashboard is now a live browser app backed by the Ghost sales_data
database.
Run/open it here:
What changed:
Added server.js with an Express API.
Updated sales_dashboard.html so filters call the API and redraw charts/tables.
Updated package.json with npm start.
The backend gets a Ghost read-only connection via ghost connect --read-only sales_data.
Browser JavaScript never sees the database password.
When I clicked on the HTML link, I got a dynamic dashboard that responded accurately to different filter values.

Summary
Hopefully, I've shown you enough of Ghost's capabilities to whet your appetite for this useful product.
The ease and speed with which you can create, destroy, copy and manipulate Postgres databases using this tool is amazing.
My favorite feature of Ghost is how easy it makes what-if experiments with realistic data. Need fresh copies of an existing database to try things out? Forks appear in moments. You can add and drop indexes and primary keys, or create a materialized view, just by explaining what you want in plain English to your favorite agent coding tools, with the CLI as a fallback if you really need it.
As my last example showed, because Ghost works so well with agent coding tools, it's easy for those agents to write code that integrates with a Ghost database to produce useful applications such as dashboards and other SaaS-style apps.
Now, while you may want to pause before storing any production data in Ghost, it's a great platform for proofs of concept and general testing. And once you've settled on a database layout you're happy with in Ghost, replicating it to your production database should be straightforward.
For more information on using Ghost and full documentation, please visit the official home page below.



