Getting Started
Get started with CodeRx in minutes. Subscribe today and get instant access to our comprehensive drug database, weekly-updated data marts, and powerful integration tools.
Subscribe to CodeRx
Subscription Plans
We offer two annual pricing plans:
- Basic: $5,500/year — Get access to comprehensive drug data marts with weekly updates, complete RxNorm mappings, rich drug knowledge graphs, and direct access to data marts hosted on AWS S3. Includes packages, drugs, ingredients, excipients, synonyms, and ATC classification.
- Premium: $15,000/year — Everything in Basic, plus indications, CMS pricing (ASP / NDC to HCPCS mappings / NADAC pricing with 5+ years of historical changes), CMS plans (Medicare Part D plan information including formularies, tiers, and pricing), NCPDP mappings, packaging data, and label images. Includes priority support from the CodeRx team.
After Your Subscription
Once you've confirmed your subscription, you'll receive:
- AWS S3 Access Credentials - Access key ID and secret access key
- S3 Bucket Information - Bucket name and region details
- Connection Instructions - Step-by-step setup guide
- Welcome Email - Additional resources and documentation links
Note: Access credentials are typically provided within 24 hours of subscription confirmation.
Accessing Your Data
After receiving your credentials, you can access your data from AWS S3 using Python:
Authentication
You'll receive:
- Access Key ID: Your AWS access key
- Secret Access Key: Your AWS secret key
- S3 Bucket Info: The name and region of your AWS S3 bucket
Keep these credentials secure and never commit them to version control.
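One common way to keep the keys out of your code is to read them from environment variables. A minimal sketch (the variable names `CODERX_ACCESS_KEY_ID` and `CODERX_SECRET_ACCESS_KEY` are illustrative conventions, not names prescribed by CodeRx):

```python
import os

def load_s3_credentials():
    # Read credentials from the environment rather than hard-coding them.
    # The variable names are illustrative; use whatever convention you like.
    return {
        "key": os.environ["CODERX_ACCESS_KEY_ID"],
        "secret": os.environ["CODERX_SECRET_ACCESS_KEY"],
    }

# The returned dict can be unpacked straight into s3fs:
# fs = s3fs.S3FileSystem(**load_s3_credentials())
```

Set the variables in your shell or a secrets manager, and `.gitignore` any local `.env` file that holds them.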
Querying Data with Python
Here's how to access and query CodeRx data using Python with s3fs:
Note: You'll need to install `s3fs`, `pandas`, and `pyarrow` to work with Parquet files. Install them with: `pip install s3fs pandas pyarrow`
```python
import s3fs
import pandas as pd

# Create a filesystem interface using your CodeRx credentials
fs = s3fs.S3FileSystem(
    key='YOUR_ACCESS_KEY_ID',
    secret='YOUR_SECRET_ACCESS_KEY',
)

# Read a parquet file directly from the bucket
df = pd.read_parquet(
    'YOUR_S3_BUCKET/drugs/drugs.parquet',
    filesystem=fs,
)

# Inspect the data
print(df.head())
print(f"Total drugs: {len(df)}")
```
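Once loaded, a snapshot is an ordinary pandas DataFrame, so the usual filtering and aggregation tools apply. A sketch using a small stand-in DataFrame (the column names `name` and `dose_form` are assumptions for illustration; check `df.columns` against the real schema):

```python
import pandas as pd

# Stand-in for a drugs snapshot; real column names may differ.
df = pd.DataFrame({
    "name": ["Aspirin 81 MG Oral Tablet", "Lisinopril 10 MG Oral Tablet",
             "Amoxicillin 250 MG Oral Capsule"],
    "dose_form": ["Oral Tablet", "Oral Tablet", "Oral Capsule"],
})

# Filter to a single dose form and count the matches
tablets = df[df["dose_form"] == "Oral Tablet"]
print(f"Total drugs: {len(df)}, oral tablets: {len(tablets)}")
```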
Data Mart Structure
Your S3 bucket contains the following data marts, each organized in its own folder:
- drugs/ - Drug products with names, RXCUIs, dose forms
- packages/ - NDC packages with pricing and pack sizes
- ingredients/ - Active and inactive ingredients
- classes/ - Drug classification systems
- excipients/ - Inactive ingredients with safety data
- synonyms/ - Drug name synonyms and aliases
Each data mart folder contains:
- Latest snapshot: `{data_mart}/{data_mart}.csv` or `{data_mart}/{data_mart}.parquet` (e.g., `drugs/drugs.parquet`)
- Dated snapshots: `{data_mart}/{data_mart}_YYYY-MM-DD.csv` or `{data_mart}/{data_mart}_YYYY-MM-DD.parquet` (e.g., `drugs/drugs_2026-01-16.parquet`)
Files are updated weekly: a new dated snapshot is added each week, and the latest file is overwritten with the most recent data.
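Given this naming convention, you can build snapshot keys programmatically instead of hand-typing paths. A small helper sketch (`snapshot_path` is a hypothetical function for illustration, not part of any CodeRx SDK; the bucket name is a placeholder):

```python
from datetime import date

def snapshot_path(bucket, data_mart, snapshot_date=None, ext="parquet"):
    """Build the S3 key for a CodeRx data mart file.

    With no date, returns the always-current latest file; with a date,
    returns that week's dated snapshot.
    """
    if snapshot_date is None:
        return f"{bucket}/{data_mart}/{data_mart}.{ext}"
    return f"{bucket}/{data_mart}/{data_mart}_{snapshot_date:%Y-%m-%d}.{ext}"

print(snapshot_path("YOUR_S3_BUCKET", "drugs"))
# YOUR_S3_BUCKET/drugs/drugs.parquet
print(snapshot_path("YOUR_S3_BUCKET", "drugs", date(2026, 1, 16)))
# YOUR_S3_BUCKET/drugs/drugs_2026-01-16.parquet
```

The resulting key can be passed to `pd.read_parquet` (or `pd.read_csv` with `ext="csv"`) along with your `s3fs` filesystem.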
Support
Need help? Contact us at support@coderx.io or visit our Slack community.