PostgreSQL is the most capable open-source relational database available today. It handles relational data, JSON documents, full-text search, geospatial queries, and vector embeddings — often eliminating the need for multiple specialized databases in a single application. This is not an accident of design. PostgreSQL's extensibility model has allowed the community to build capabilities that rival proprietary databases at a fraction of the cost.
The challenge is not whether to use PostgreSQL. For most web applications, it is the right choice by default. The challenge is running it correctly in production. A misconfigured PostgreSQL instance will underperform, leak connections, create security vulnerabilities, and fail to recover cleanly from incidents.
This guide covers everything needed to run PostgreSQL in production: setup decisions, connection management, security hardening, backup strategy, performance tuning, and scaling patterns. Whether you are launching your first application or migrating an existing system, this is the reference you need.
Why PostgreSQL
PostgreSQL has become the default database for modern web applications. It handles relational data, JSON documents, full-text search, and vector embeddings in a single system. That breadth eliminates the operational complexity of running separate specialized stores for each data type.
The technical foundation is strong. PostgreSQL provides full ACID compliance — atomicity, consistency, isolation, and durability — without compromising query performance. Its MVCC (Multi-Version Concurrency Control) architecture allows readers and writers to operate concurrently without blocking each other, which matters significantly at scale.
Extensibility is what separates PostgreSQL from other open-source databases. The extension ecosystem adds capabilities that are not native to other relational databases:
- PostGIS for geospatial data and location queries
- pgvector for storing and querying machine learning embeddings
- pg_trgm for fuzzy string matching and similarity search
- TimescaleDB for time-series data with automatic partitioning
- Citus for horizontal sharding across multiple nodes
The adoption list is significant. Instagram, Spotify, and Reddit all run PostgreSQL at scale. These are not proof-of-concept deployments; they serve enormous query volumes against very large datasets, and the database handles it because the architecture was designed for it.
ACID compliance, sustained community investment, and a development lineage stretching back to the 1980s translate to a database you can trust with critical data. Serious data-corruption bugs are rare, and the project's record of finding and fixing them quickly matters more than any benchmark.
Managed vs. Self-Hosted PostgreSQL
The first decision in any PostgreSQL production setup is whether to manage the database yourself or delegate that responsibility to a managed service. This decision has significant implications for team capacity, operational risk, and long-term cost.
| Factor | Managed (Out Plane, RDS, Supabase) | Self-Hosted |
|---|---|---|
| Setup Time | Minutes | Hours to days |
| Backups | Automatic with PITR | Manual configuration or cron jobs |
| Scaling | Auto or one-click | Manual vertical and horizontal |
| High Availability | Built-in with failover | Complex replica and failover setup |
| Security Patches | Automatic | Your responsibility, your timeline |
| Cost | Usage-based | Server cost plus engineer time |
| Monitoring | Built-in dashboards | DIY (Prometheus, Grafana, pgBadger) |
| Version Upgrades | Managed with minimal downtime | Manual pg_upgrade process |
| Connection Pooling | Often included | Install and configure PgBouncer |
Self-hosted PostgreSQL makes sense in specific circumstances: regulatory requirements that prohibit third-party data access, unusual performance requirements that demand custom kernel tuning, or infrastructure environments where managed services are not available.
For most teams, the operational overhead of self-hosted PostgreSQL is not a competitive advantage. It is an engineering tax. Database administration requires continuous attention — patching CVEs, managing WAL archiving, testing failover procedures, diagnosing bloat, tuning autovacuum. Every hour spent on these tasks is an hour not spent on product.
Managed PostgreSQL delegates these concerns to specialists. You get the connection string. You write queries. The platform handles everything else.
Setting Up Managed PostgreSQL on Out Plane
Out Plane provides managed PostgreSQL with versions 14 through 18. The default region is Nuremberg, Germany. Setup takes under five minutes. Here is the exact process.
Step 1: Navigate to Databases
Log in to console.outplane.com and select Databases from the left sidebar. This section manages all database instances across your organization.
Step 2: Create a Database Instance
Click Create Database. You will see a form with the following fields:
- Name: Enter a name between 3 and 63 characters. Use lowercase letters, numbers, and hyphens. Examples: `my-app-production` or `api-db-prod`.
- Engine: Select PostgreSQL and choose your version. For new projects, use the latest stable version (17 or 18). If you are migrating from an existing instance, match your current major version.
- Region: Choose the region closest to your application servers. The default region is Nuremberg, Germany. Co-locating your database and application in the same region reduces latency and eliminates cross-region data transfer costs.
Step 3: Choose Compute Size
Out Plane offers a range of instance types for database sizing, from the entry-level op-20 (0.5 vCPU, 512MB RAM) through the op-94 (32 vCPU, 64GB RAM). For a new application with under 50 concurrent users, start with a smaller instance type. You can scale compute up without downtime as traffic grows.
Step 4: Provision and Connect
Click Create Database. Provisioning takes approximately 60 seconds. Once the status shows Ready, navigate to the database detail page. Copy the connection URL — it follows this format:
```
postgres://username:password@hostname:5432/database
```
Step 5: Configure Your Application
Add the connection URL to your application as an environment variable named DATABASE_URL. Most database libraries and ORMs recognize this convention automatically.
For Django with dj-database-url:
```python
import os

import dj_database_url

DATABASES = {'default': dj_database_url.parse(os.environ['DATABASE_URL'])}
```

For Node.js with pg:

```javascript
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
```

For Go with pgx:

```go
conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
```

The application is connected. No additional configuration is required to reach a running, production-ready database.
Connection Pooling
Database connections are expensive. Each PostgreSQL connection spawns a new server process, consuming memory and CPU. A database configured for 100 connections allocates server resources for all 100 regardless of activity level. Applications with connection pooling misconfigurations are among the most common sources of PostgreSQL production incidents.
The problem becomes acute with stateless application servers. A web application running 10 instances might open 10 connections per instance, for 100 total connections. Scale to 50 instances and you hit connection limits — new requests fail even though the database is handling queries with only 20% CPU utilization.
Connection pooling solves this by maintaining a smaller pool of real PostgreSQL connections and multiplexing application requests through them. Instead of each application thread holding a dedicated connection, a pool of 20 database connections handles hundreds of concurrent application threads.
PgBouncer
PgBouncer is the standard PostgreSQL connection pooler. It operates in three modes:
- Session pooling: One server connection per client session. Minimal behavior change, modest benefit.
- Transaction pooling: Server connection held only for the duration of a transaction. Best fit for stateless web applications.
- Statement pooling: Server connection held for a single statement. Highest connection reuse, but incompatible with multi-statement transactions.
Transaction pooling is the right choice for most web applications. It allows a pool of 20 database connections to serve hundreds of application threads, each completing transactions quickly.
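If you run PgBouncer yourself, a minimal transaction-pooling configuration looks roughly like the sketch below. Hostnames, pool sizes, and file paths are illustrative, and the authentication setup will differ per deployment.

```ini
; Minimal PgBouncer configuration for transaction pooling (illustrative values)
[databases]
; route "appdb" through the pooler to the real PostgreSQL host
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction        ; hold a server connection only for a transaction
default_pool_size = 20         ; real PostgreSQL connections per database/user pair
max_client_conn = 500          ; application connections multiplexed over the pool
server_idle_timeout = 600
```

Applications then connect to port 6432 instead of 5432; no query changes are required in transaction mode as long as the code avoids session-level state such as prepared statements or advisory locks held across transactions.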
Out Plane includes built-in connection pooling in the managed PostgreSQL offering. The pooling endpoint is available alongside the direct connection URL on your database detail page. For most applications, use the pooling endpoint in production.
Configuration Recommendations
Application-level pool settings:
For a web application on a single server:
- Minimum pool size: 2
- Maximum pool size: 10
- Idle timeout: 600 seconds
- Connection timeout: 30 seconds
For a horizontally scaled application (multiple instances):
- Maximum pool size per instance: 5 to 8
- Total connections should not exceed 80% of the database's `max_connections` setting
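The 80% rule above reduces to simple arithmetic. A minimal sketch, assuming you know `max_connections` and the number of application instances; the function name and headroom default are illustrative:

```python
# Sketch: derive a safe per-instance pool size from max_connections.
# The 80% headroom figure follows the guidance above; verify the exact
# max_connections value for your own database.

def per_instance_pool_size(max_connections: int, instances: int,
                           headroom: float = 0.8) -> int:
    """Largest pool size per app instance that keeps total connections
    under headroom * max_connections."""
    budget = int(max_connections * headroom)
    return max(1, budget // instances)

# A database with max_connections = 100 and 10 app instances:
print(per_instance_pool_size(100, 10))  # 8 connections per instance
```

Recompute this whenever you change the instance count: scaling from 10 to 50 instances drops the safe per-instance pool from 8 to 1, which is usually the moment to move behind a pooler like PgBouncer instead.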
ORMs and connection pools:
Most ORMs create their own connection pool. Verify pool settings explicitly rather than relying on defaults:
```python
# SQLAlchemy - explicit pool configuration
from sqlalchemy import create_engine

engine = create_engine(
    DATABASE_URL,
    pool_size=5,        # persistent connections kept open
    max_overflow=10,    # additional connections allowed under burst load
    pool_timeout=30,    # seconds to wait for a free connection
    pool_recycle=1800,  # recycle connections older than 30 minutes
)
```

```javascript
// node-postgres - explicit pool configuration
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                       // maximum connections in the pool
  idleTimeoutMillis: 30000,      // close idle connections after 30 seconds
  connectionTimeoutMillis: 2000, // fail fast when the pool is exhausted
});
```

Security Hardening
A production PostgreSQL instance requires explicit security configuration. The defaults are not sufficient for internet-facing applications.
Network Security
IP Allowlisting
Restrict database access to known IP addresses. In the Out Plane console, navigate to your database's Network tab and add allowlist entries for your application server IP ranges. If your application servers are on Out Plane, use the private network range — traffic between services on the same platform stays on private networking and never traverses the public internet.
A database with no IP allowlist is accessible to any host that can reach it on port 5432. This is not an acceptable configuration for production data.
Private Networking
Deploy your application and database in the same region and network. Private networking between services eliminates public routing, reduces latency, and removes the database port from public exposure entirely. On Out Plane, services within the same project communicate over internal networking automatically when configured in the same region.
SSL/TLS for Connections
All connections to managed PostgreSQL on Out Plane use SSL by default. Verify your connection string includes the SSL parameter if your client requires explicit configuration:
```
postgres://user:pass@host:5432/db?sslmode=require
```
Never connect to production databases with `sslmode=disable`. Transit encryption is non-negotiable for any data of consequence.
Authentication
Least Privilege Roles
Create separate PostgreSQL roles for different application components. A web application does not need the same permissions as a migration runner or analytics service.
```sql
-- Application user: read/write on application tables only
CREATE ROLE app_user WITH LOGIN PASSWORD 'secure_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;

-- Read-only user: for analytics, reporting, or read replicas
CREATE ROLE app_readonly WITH LOGIN PASSWORD 'secure_password';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;

-- Migration user: schema modification rights
CREATE ROLE app_migrate WITH LOGIN PASSWORD 'secure_password';
GRANT ALL PRIVILEGES ON SCHEMA public TO app_migrate;

-- GRANT ... ON ALL TABLES covers only existing tables; make sure tables
-- created by future migrations are accessible to the application too
ALTER DEFAULT PRIVILEGES FOR ROLE app_migrate IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;
```

The application server uses `app_user`. Schema migrations run as `app_migrate`. Analytics queries run as `app_readonly`. A compromised application credential cannot execute `DROP TABLE`.
Out Plane managed PostgreSQL supports multiple roles and multiple databases per instance. Create separate databases per service when running a multi-service architecture — this provides schema isolation without requiring separate database instances.
Rotate Credentials Regularly
Database credentials are long-lived secrets. Implement credential rotation as a standard practice, not an emergency response. Managed platforms make this straightforward: generate new credentials, update the environment variable in your application, and the old credentials become inactive.
Data Protection
Encryption at Rest
All data in managed PostgreSQL on Out Plane is encrypted at rest using AES-256. This covers data files, WAL segments, and backups.
Audit Logging
Enable PostgreSQL's activity logging for security-relevant events. The `log_connections`, `log_disconnections`, and `log_statement` settings create an audit trail. Out Plane's event logging captures database-level activity through the console, providing visibility into connection patterns and unusual query volumes without requiring custom log aggregation setup.
Backup Strategies
Backup failures are invisible until they matter. By then, it is too late.
Automated Backups on Managed Platforms
Out Plane takes automated daily backups of all managed PostgreSQL instances. Backups are stored in durable object storage with geographic replication and retained for a defined window, so you can restore from any backup taken within that period.
Automated backups handle the most common recovery scenarios: accidental data deletion, application bugs that corrupt data, and infrastructure incidents. They require no configuration and no ongoing maintenance.
Point-in-Time Recovery
Point-in-time recovery (PITR) reconstructs the database state to any specific moment by replaying WAL (Write-Ahead Log) segments on top of a base backup. If a deployment at 14:00 introduced a bug that deleted user records, PITR allows recovery to 13:59 — a one-minute data loss window rather than a 24-hour window.
PITR is the difference between a recoverable incident and a catastrophic one. Managed platforms including Out Plane provide PITR as part of the standard database service.
Logical Backups with pg_dump
For application-level backups, schema exports, or database migrations, pg_dump creates portable logical backups:
```shell
# Full database dump in custom format (compressed, supports selective restore)
pg_dump --format=custom --no-acl --no-owner \
  "$DATABASE_URL" > backup_$(date +%Y%m%d_%H%M%S).dump

# Restore from custom format dump
pg_restore --no-acl --no-owner \
  --dbname="$TARGET_DATABASE_URL" backup.dump
```

Logical backups are useful for migrating between instances, copying production data to staging, and creating portable snapshots before major schema changes.
Test Your Backups
A backup that has never been restored is not a backup. It is an untested file that might contain a valid database.
Establish a regular restore testing cadence. Monthly is a minimum; weekly is better. The test should go beyond "the file exists" and verify actual data integrity after restoration. Many teams discover their backup process has been silently broken for months only when they attempt a real recovery.
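A restore test can be reduced to comparing per-table row counts captured from the source against the restored copy. The sketch below shows only the comparison logic; collecting the counts from each database (for example with a driver such as psycopg) is deliberately left out, and the function name is hypothetical:

```python
# Sketch of a backup restore check: after restoring into a scratch database,
# compare per-table row counts against counts captured from the source.

def verify_restore(source_counts: dict[str, int],
                   restored_counts: dict[str, int]) -> list[str]:
    """Return a list of human-readable problems; an empty list means the
    restored database matches the source snapshot."""
    problems = []
    for table, expected in source_counts.items():
        actual = restored_counts.get(table)
        if actual is None:
            problems.append(f"{table}: missing from restore")
        elif actual != expected:
            problems.append(f"{table}: expected {expected} rows, got {actual}")
    return problems

print(verify_restore({"users": 100, "orders": 5000},
                     {"users": 100, "orders": 4990}))
# ['orders: expected 5000 rows, got 4990']
```

Row counts are a floor, not a ceiling: they catch truncated restores and missing tables, but a thorough test should also run a handful of application-level queries against the restored copy.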
Performance Tuning
Essential Configuration Parameters
Default PostgreSQL configuration is conservative. It is designed to run on limited hardware. Production instances require explicit tuning.
shared_buffers
PostgreSQL's primary cache. Set to 25% of available RAM.
```
shared_buffers = 2GB  # for an 8GB instance
```
effective_cache_size
An estimate of the total memory available for caching, including the OS page cache. Used by the query planner to estimate whether an index will fit in memory. Set to 75% of available RAM.
```
effective_cache_size = 6GB  # for an 8GB instance
```
work_mem
Memory allocated per sort operation and hash table. Applies per operation, not per connection — a complex query can use work_mem multiple times. Set conservatively for shared instances.
```
work_mem = 64MB  # adjust based on query patterns and available RAM
```
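Because work_mem applies per sort or hash operation, the worst case multiplies quickly. A rough, illustrative calculation (the nodes-per-query figure is an assumption, not a measured value):

```python
# Sketch: worst-case memory exposure from work_mem. Each sort or hash node
# in a query plan can claim up to work_mem, so the theoretical ceiling is
# connections * sort/hash nodes per query * work_mem.

def worst_case_work_mem_mb(work_mem_mb: int, connections: int,
                           sort_nodes_per_query: int = 3) -> int:
    return work_mem_mb * connections * sort_nodes_per_query

# 64MB work_mem, 100 connections, ~3 sort/hash nodes per active query:
print(worst_case_work_mem_mb(64, 100))  # 19200 MB, far beyond an 8GB instance
```

Real workloads never hit the theoretical ceiling simultaneously, but the arithmetic explains why work_mem must stay conservative on shared instances while a dedicated analytics instance can afford a much higher value.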
maintenance_work_mem
Memory used for maintenance operations: VACUUM, CREATE INDEX, ALTER TABLE. Higher values speed up these operations significantly.
```
maintenance_work_mem = 512MB
```
wal_level
Required for replication and point-in-time recovery. Set to replica or logical depending on replication needs.
```
wal_level = replica
```
Query Optimization
EXPLAIN ANALYZE
Every slow query investigation starts here. EXPLAIN ANALYZE executes the query and returns the actual execution plan with timing data.
```sql
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.id, u.email, COUNT(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > NOW() - INTERVAL '30 days'
GROUP BY u.id, u.email;
```

The output reveals sequential scans that should be index scans, nested loops that should be hash joins, and row count estimation errors that cause the planner to choose suboptimal plans.
Index Strategies
PostgreSQL supports multiple index types. Choosing the wrong one is a common performance mistake.
- B-tree: The default. Use for equality and range queries on scalar values. Covers 90% of use cases.
- GIN (Generalized Inverted Index): Use for full-text search, JSONB containment queries, and array operations.
- GiST: Use for geometric and geographic data, full-text search with ranking, and custom data types.
- BRIN (Block Range Index): Use for naturally ordered large datasets (timestamps, sequential IDs). Very small index size.
```sql
-- Standard B-tree index for equality and range lookups
CREATE INDEX idx_orders_user_created ON orders (user_id, created_at DESC);

-- GIN index for JSONB containment queries
CREATE INDEX idx_events_metadata ON events USING GIN (metadata);

-- GIN index for full-text search
CREATE INDEX idx_articles_search ON articles USING GIN (to_tsvector('english', title || ' ' || content));
```

Avoiding N+1 Queries
N+1 queries are the most common source of application-level performance problems. They occur when code fetches a list of records and then executes a separate query for each record.
```python
from django.db.models import Count

# N+1: 1 query for users + N queries for orders
users = User.objects.all()
for user in users:
    print(user.orders.count())  # executes a query per user

# Correct: 1 query with aggregation
users = User.objects.annotate(order_count=Count('orders'))
for user in users:
    print(user.order_count)
```

Query logs reveal N+1 patterns: many identical queries with different parameter values executing within milliseconds of each other.
Monitoring Performance
pg_stat_statements
The pg_stat_statements extension tracks query execution statistics — total calls, mean execution time, total rows, cache hit ratio — across all queries. Enable it and query it regularly to identify slow or frequently executed queries.
```sql
-- Top 10 slowest queries by mean execution time
SELECT query, calls, mean_exec_time, total_exec_time, rows
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```

Connection Count Monitoring
Track active connections against the `max_connections` limit. Sustained connection counts above 80% of the limit indicate connection pool misconfiguration.
```sql
SELECT count(*), state
FROM pg_stat_activity
GROUP BY state;
```

Lock Contention
Long-held locks block other queries and degrade throughput. The pg_locks and pg_stat_activity views expose lock waiters.
```sql
SELECT blocked.pid, blocked.query, blocking.pid AS blocking_pid, blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
WHERE blocked.wait_event_type = 'Lock';
```

Common PostgreSQL Patterns for Web Applications
JSON and JSONB
PostgreSQL's JSONB type stores JSON as a parsed binary representation with full indexing support. It is not a replacement for structured columns, but it solves real problems.
Use structured columns for:
- Data with known, stable schema that participates in joins or aggregations
- Columns used in WHERE clauses on equality or range conditions
- Data with strict type requirements
Use JSONB for:
- Flexible metadata attached to entities (configuration, attributes, event payloads)
- Data with variable or user-defined structure
- Semi-structured log data you need to query occasionally
```sql
-- Efficient JSONB containment query
SELECT * FROM events WHERE metadata @> '{"source": "mobile"}';

-- Index for fast JSONB containment lookups
CREATE INDEX idx_events_metadata_gin ON events USING GIN (metadata);

-- Query a specific JSONB key
SELECT * FROM events WHERE metadata->>'user_id' = '12345';
```

Full-Text Search
PostgreSQL's built-in full-text search handles most application search requirements without requiring a dedicated search engine.
```sql
-- Add a tsvector column for search
ALTER TABLE articles ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''))
    ) STORED;

-- Index the search vector
CREATE INDEX idx_articles_search ON articles USING GIN (search_vector);

-- Full-text search query
SELECT id, title, ts_rank(search_vector, query) AS rank
FROM articles, to_tsquery('english', 'postgresql & performance') query
WHERE search_vector @@ query
ORDER BY rank DESC;
```

PostgreSQL full-text search is appropriate for most applications. Move to a dedicated system like Elasticsearch only when you genuinely need fuzzy matching, faceted search, or real-time indexing at high volume. Wait until you actually hit that threshold rather than adding infrastructure complexity from day one.
Vector Embeddings with pgvector
The pgvector extension adds vector storage and similarity search to PostgreSQL. It supports AI applications that store and query machine learning embeddings — semantic search, recommendation systems, and retrieval-augmented generation (RAG).
```sql
-- Enable pgvector
CREATE EXTENSION IF NOT EXISTS vector;

-- Table with embedding column
CREATE TABLE documents (
    id BIGSERIAL PRIMARY KEY,
    content TEXT,
    embedding vector(1536)  -- OpenAI text-embedding-3-small dimensions
);

-- Index for approximate nearest neighbor search
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);

-- Find the 10 most similar documents to a query embedding
SELECT id, content, embedding <=> $1 AS distance
FROM documents
ORDER BY embedding <=> $1
LIMIT 10;
```

pgvector is sufficient for most AI applications. When vector datasets reach tens of millions of entries with strict latency requirements below 50ms, dedicated vector databases (Qdrant, Weaviate, Pinecone) may offer better performance characteristics. The majority of applications never reach that threshold.
Scaling PostgreSQL
Vertical Scaling
The first scaling lever is always vertical. More CPU handles more concurrent queries. More RAM increases cache hit ratio and reduces disk I/O. More I/O throughput allows faster WAL writes and vacuum operations.
On Out Plane, compute scaling happens without downtime. Adjust your instance type through the console and the change applies with a brief restart. Move to a larger instance type when query volume increases — this is faster and simpler than any other scaling intervention.
Vertical scaling handles most growth. The majority of applications can serve millions of users on a single well-configured PostgreSQL instance with appropriate indexing and query optimization.
Read Replicas
For read-heavy workloads, read replicas distribute query load across multiple database servers. The primary instance handles all writes and propagates changes to replicas via streaming replication. Read queries route to replicas, reducing load on the primary.
Read replicas introduce replication lag — replicas are slightly behind the primary, typically by milliseconds but potentially by seconds under write load. Applications must tolerate reading slightly stale data on the replica path. User-facing queries that require immediately consistent data should go to the primary.
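One way to honor that rule in application code is a small router that chooses between the primary and replica connection URLs. A minimal sketch; the class name and URLs are illustrative, not an Out Plane API:

```python
# Sketch of application-level read/write routing, assuming two connection
# URLs (primary and replica) supplied by the platform.

class ConnectionRouter:
    """Pick a connection URL based on query intent and consistency needs."""

    def __init__(self, primary_url: str, replica_url: str):
        self.primary_url = primary_url
        self.replica_url = replica_url

    def url_for(self, is_write: bool, needs_fresh_read: bool = False) -> str:
        # Writes and read-your-own-writes paths must hit the primary;
        # everything else can tolerate replica lag.
        if is_write or needs_fresh_read:
            return self.primary_url
        return self.replica_url

router = ConnectionRouter("postgres://primary/db", "postgres://replica/db")
print(router.url_for(is_write=False))                         # replica URL
print(router.url_for(is_write=False, needs_fresh_read=True))  # primary URL
```

The `needs_fresh_read` escape hatch is the important part: a user who just updated their profile should read it back from the primary, while a public listing page can happily read seconds-stale data from a replica.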
Table Partitioning
Partitioning splits a large table into smaller physical segments while maintaining a single logical table interface. It accelerates queries that filter on the partition key and simplifies data lifecycle management (dropping old partitions instead of deleting rows).
```sql
-- Range partitioning by date on a high-volume events table
CREATE TABLE events (
    id BIGSERIAL,
    user_id BIGINT NOT NULL,
    event_type TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE events_2026_01 PARTITION OF events
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE events_2026_02 PARTITION OF events
    FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
```

Partitioning becomes valuable when a table exceeds 100 million rows and queries consistently filter on the partition key. Apply it when query performance degrades, not preemptively.
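Monthly partitions must exist before rows arrive for that month, so teams typically create them ahead of time from a scheduled job. A small, hypothetical helper can generate the DDL:

```python
# Sketch: generate monthly range-partition DDL for a table partitioned
# by a timestamp column, as in the events example above.

from datetime import date

def monthly_partition_ddl(table: str, year: int, month: int) -> str:
    start = date(year, month, 1)
    # First day of the following month, handling the December -> January wrap
    end = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
    name = f"{table}_{start:%Y_%m}"
    return (f"CREATE TABLE IF NOT EXISTS {name} PARTITION OF {table}\n"
            f"    FOR VALUES FROM ('{start}') TO ('{end}');")

print(monthly_partition_ddl("events", 2026, 1))
```

Running this for the next two or three months on a weekly cron job guarantees inserts never fail for lack of a matching partition. The pg_partman extension automates the same bookkeeping if you would rather not maintain the script.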
Sharding
Horizontal sharding distributes data across multiple independent PostgreSQL instances. It is the most complex scaling approach and the one most prematurely applied.
Most applications that believe they need sharding actually need better indexing, query optimization, vertical scaling, or read replicas. Sharding introduces cross-shard query complexity, makes joins between sharded tables difficult, and complicates schema migrations.
Reach for sharding only after exhausting vertical scaling, read replicas, and application-level caching. Teams running hundreds of millions of users have reached this point. Most teams never will.
Migration Guide
Migrating from MySQL to PostgreSQL
MySQL and PostgreSQL share SQL syntax but differ in important ways. Key differences to address during migration:
- MySQL's `AUTO_INCREMENT` becomes PostgreSQL's `SERIAL` or `GENERATED ALWAYS AS IDENTITY`
- MySQL's `TINYINT(1)` for booleans becomes PostgreSQL's `BOOLEAN`
- String comparison is case-sensitive in PostgreSQL by default
- MySQL's `DATETIME` maps to PostgreSQL's `TIMESTAMP`
- Full-text search syntax differs significantly
The pgloader tool automates MySQL-to-PostgreSQL migrations, handling type conversions and data transfer:
```shell
pgloader mysql://user:pass@mysql-host/dbname \
         postgres://user:pass@pg-host/dbname
```

Test application behavior thoroughly after migration. Query differences surface at runtime through missing index hits, case sensitivity bugs, and type coercion differences.
Migrating from MongoDB to PostgreSQL
Document-to-relational migrations require schema design work that no tool can automate. The migration process follows three phases:
- Schema design: Map document structures to normalized relational tables. Embedded documents become related tables. Arrays become junction tables or JSONB columns depending on query patterns.
- Data export: Export MongoDB collections to JSON with `mongoexport`.
- Data import: Transform and load the JSON into PostgreSQL using `\copy` or a custom ETL script.
Many MongoDB-to-PostgreSQL migrations discover that the document schema was under-designed. The relational migration forces schema decisions that improve data integrity and query performance.
pg_dump and pg_restore
For migrations between PostgreSQL instances:
```shell
# Export from source
pg_dump --format=custom --no-acl --no-owner \
  "postgres://user:pass@source-host/dbname" > database.dump

# Import to target
pg_restore --no-acl --no-owner \
  --dbname="postgres://user:pass@target-host/dbname" database.dump
```

For large databases, use parallel restore to speed up the import:
```shell
pg_restore --jobs=4 --no-acl --no-owner \
  --dbname="postgres://user:pass@target-host/dbname" database.dump
```

Zero-Downtime Migration Strategies
Schema changes that require table rewrites (adding non-nullable columns, changing column types) cause lock contention on active tables. For applications that cannot tolerate downtime, use expand-contract migrations:
- Expand: Add the new column as nullable. Deploy application code that writes to both old and new columns.
- Backfill: Populate the new column for existing rows in batches to avoid locking.
- Validate: Verify the new column has correct data for all rows.
- Contract: Remove the old column. Deploy application code that reads only from the new column.
This pattern works for column renaming, type changes, and normalization changes. Tools like squawk analyze migration files and flag potentially dangerous operations before they reach production.
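The backfill step above can be sketched as a generator of bounded UPDATE statements, so no single statement locks the whole table. Table, column, and batch-size values here are hypothetical, and execution through your database driver is omitted:

```python
# Sketch of the expand-contract backfill step: update rows in primary-key
# ranges so each statement touches at most batch_size rows.

def backfill_statements(table: str, new_col: str, old_col: str,
                        max_id: int, batch_size: int = 10_000):
    """Yield one bounded UPDATE per id range."""
    for lo in range(0, max_id, batch_size):
        hi = lo + batch_size
        yield (f"UPDATE {table} SET {new_col} = {old_col} "
               f"WHERE id >= {lo} AND id < {hi} AND {new_col} IS NULL;")

stmts = list(backfill_statements("users", "email_normalized", "email", 25_000))
print(len(stmts))  # 3 batches for 25,000 rows at 10,000 per batch
```

In production you would execute each statement in its own transaction, sleep briefly between batches to let autovacuum and replication keep up, and make the job resumable by recording the last completed range.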
Summary
PostgreSQL covers the database requirements of the vast majority of web applications. Its combination of relational integrity, JSON support, full-text search, and vector capabilities makes it the most versatile open-source database available. Building on a solid foundation of PostgreSQL production configuration — connection pooling, security hardening, automated backups, and appropriate performance tuning — means the database will handle growth without incident.
The operational overhead of self-hosted PostgreSQL is substantial and ongoing. Managed PostgreSQL eliminates that overhead. You get the full power of PostgreSQL without the time investment of managing it yourself.
The practical approach is: start with managed PostgreSQL, optimize configuration as traffic grows, and add scaling mechanisms when actual usage patterns justify them. Most applications will run confidently for years on a single managed instance with proper indexing and connection pooling in place.
Ready to provision a production PostgreSQL database in minutes? Visit console.outplane.com to create your first managed PostgreSQL instance. If you are building a new application stack, the SaaS tech stack guide covers how PostgreSQL fits alongside the other services you will need. For a direct comparison of PostgreSQL against document databases, see PostgreSQL vs. MongoDB. If you are deploying a Django application with PostgreSQL, the Django deployment guide walks through the full setup from code to production.