
How I Use Claude Code to Research Stocks (Complete Workflow)

By Charlie Chan

I run a one-person investing operation that tracks 21 public companies, builds DCF valuations for each one, and updates them every earnings cycle. My AI research partner is Claude Code -- Anthropic's command-line tool that reads local files, runs code, and works directly with financial data. I configured it with a custom CLAUDE.md file that enforces SEC EDGAR as the primary data source, sets research standards, and maintains memory across sessions. This is my complete workflow.

Two years ago, building a DCF model from a 10-K filing took me an entire afternoon. Now I can go from SEC filing to completed valuation in under 30 minutes. Not because AI does the thinking for me -- it doesn't pick stocks or make predictions -- but because it handles the tedious extraction and calculation work while I focus on judgment calls about growth assumptions and competitive dynamics.

What Is Claude Code and Why Use It for Stock Research?

Claude Code is Anthropic's CLI tool that runs in your terminal and works directly with files on your computer. Unlike ChatGPT or web-based AI tools, it can read your local SEC filings, manipulate spreadsheets, execute code, and maintain context about your entire research library.

For investment research, this distinction matters. Web-based AI tools work from whatever they were trained on. Claude Code works with your actual data -- the 10-K you just downloaded, the JSON watchlist files you maintain, the DCF engine you built. It reads a 200-page filing directly, pulls specific numbers from XBRL data, and can cross-reference them against your existing models. No copy-pasting between windows. No uploading files to a chatbot. It operates inside your research environment as a capable partner, not a separate tool you consult.

How I Set Up Claude Code for Investment Research

The foundation is a CLAUDE.md configuration file that lives in my project root. When Claude Code starts a session, it reads this file and operates according to its rules. I named my configuration "Jarvis" -- it functions as an AI chief of staff with specific research standards baked in.

The critical rules: SEC EDGAR is the only acceptable primary data source. Never Yahoo Finance, never AI-generated estimates. Every financial figure must include a citation with source and page number. The format looks like: Revenue: $323.9B (Source: 4Q25 Earnings PDF, p.13). This isn't pedantic -- it's how you avoid the hallucination problem that makes most people distrust AI research.

A simplified version of the investing section of my CLAUDE.md:

```markdown
## Research Standards
- Data sources (strict order): SEC EDGAR -> Company IR -> never Yahoo Finance
- Always cite source + page
- Gold standard file structure: match MSFT's research folder

## Key Locations
| What | Where |
|------|-------|
| Research | investing/research/individual/{TICKER}/ |
| Watchlist | content/watchlist/{TICKER}.json |
```

The configuration also includes a memory system. Jarvis reads daily memory files and long-term context at session start, so it remembers which companies I've researched, what my thesis is on each position, and what changed last quarter. Between sessions, nothing runs -- files on disk are the only continuity.

How I Research a New Company With Claude Code

When I add a new company to my watchlist, the workflow follows a specific sequence.

First, I pull financial data from SEC EDGAR. The SEC's XBRL API returns structured financial data for every public company. I built a normalize.ts module that maps the dozens of XBRL concept name variations (companies report revenue under different tags like Revenues, RevenueFromContractWithCustomerExcludingAssessedTax, or SalesRevenueNet) into a standardized schema. Claude Code calls this normalization layer to pull clean annual and quarterly financials going back 15 years.
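The tag-priority idea behind that normalization layer can be sketched like this. The tag names are real XBRL revenue concepts; everything else -- the function names, the priority order, the data shape -- is an illustrative guess, not the actual normalize.ts:

```typescript
// Companies report revenue under different XBRL concept names.
// Check them in priority order and take the first one present.
const REVENUE_TAGS = [
  "RevenueFromContractWithCustomerExcludingAssessedTax",
  "Revenues",
  "SalesRevenueNet",
] as const;

interface XbrlFact {
  concept: string;
  value: number;
  fiscalYear: number;
}

// Return the revenue figure for a fiscal year, or undefined if no
// recognized revenue tag was reported for that year.
function normalizedRevenue(facts: XbrlFact[], fiscalYear: number): number | undefined {
  for (const tag of REVENUE_TAGS) {
    const hit = facts.find(f => f.concept === tag && f.fiscalYear === fiscalYear);
    if (hit) return hit.value;
  }
  return undefined;
}
```

The same pattern extends to operating income, capex, and every other line item: one canonical field name in your schema, a priority-ordered list of XBRL tags feeding it.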

Second, I read the latest 10-K or 10-Q filing. Claude Code reads the full document and extracts the key sections: business description, risk factors, management discussion, and financial statements. I ask it to identify the business model, revenue drivers, competitive moat, and anything that changed from the prior year.

Third, we analyze the numbers together. Revenue trends, margin trajectories, free cash flow conversion, capital allocation patterns. Claude Code can cross-reference the current filing against historical data it already has in the watchlist JSON, flagging any inflection points.

Fourth, we build the DCF assumptions. Based on the business analysis and financial trends, I set revenue growth rates, operating margins, and reinvestment assumptions for the next 5-10 years. Claude Code doesn't decide these -- I do -- but it pressure-tests them against historical patterns and flags anything that looks inconsistent.

How Claude Code Builds My DCF Valuation Models

Every company on my watchlist gets a full discounted cash flow valuation. The DCF engine I built for Carepital runs entirely client-side in TypeScript, but the assumptions feeding it come from my research workflow with Claude Code.

The calculation flow: Revenue projections are multiplied by EBIT margin assumptions to get operating income. After applying the tax rate, we add back depreciation and subtract capital expenditures and changes in net working capital to arrive at unlevered free cash flow. Each year's FCF is discounted back at the weighted average cost of capital.
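That flow reduces to a few lines of TypeScript. This is a minimal sketch of the same arithmetic, not the actual Carepital engine -- the interface and function names are mine:

```typescript
interface YearAssumptions {
  revenue: number;       // projected revenue for the year
  ebitMargin: number;    // e.g. 0.30 for 30%
  taxRate: number;       // e.g. 0.21
  depreciation: number;
  capex: number;
  deltaNwc: number;      // change in net working capital
}

// Unlevered FCF = EBIT * (1 - tax) + D&A - capex - change in NWC
function unleveredFcf(y: YearAssumptions): number {
  const ebit = y.revenue * y.ebitMargin;
  const nopat = ebit * (1 - y.taxRate);
  return nopat + y.depreciation - y.capex - y.deltaNwc;
}

// Discount each projected year's FCF back at WACC (year 1 = one full period).
function pvOfProjectedFcf(years: YearAssumptions[], wacc: number): number {
  return years.reduce(
    (pv, y, i) => pv + unleveredFcf(y) / Math.pow(1 + wacc, i + 1),
    0,
  );
}
```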

For terminal value, I use both methods and compare: perpetuity growth (assuming FCF grows at a terminal rate forever) and exit multiple (applying an EV/EBITDA multiple to the final year). The two approaches should converge roughly -- if they don't, something in the assumptions is off.
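Both terminal-value approaches are simple formulas. The implied-multiple helper below is my own convenience for the "do they roughly converge?" check, not something from the article's engine:

```typescript
// Perpetuity growth: TV = FCF_n * (1 + g) / (WACC - g)
function tvPerpetuity(finalFcf: number, wacc: number, g: number): number {
  if (wacc <= g) throw new Error("WACC must exceed terminal growth");
  return (finalFcf * (1 + g)) / (wacc - g);
}

// Exit multiple: TV = EV/EBITDA multiple * final-year EBITDA
function tvExitMultiple(finalEbitda: number, evToEbitda: number): number {
  return finalEbitda * evToEbitda;
}

// The EV/EBITDA multiple your perpetuity assumptions imply.
// Compare it to the multiple you would actually apply as a sanity check.
function impliedExitMultiple(finalFcf: number, finalEbitda: number, wacc: number, g: number): number {
  return tvPerpetuity(finalFcf, wacc, g) / finalEbitda;
}
```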

Every valuation includes a sensitivity analysis. The model generates a grid showing implied share price across different WACC and terminal growth rate combinations. This is where the real insight lives -- not in a single price target, but in understanding how sensitive the valuation is to each assumption.
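Generating that grid is a nested map over the two assumption axes. For brevity this sketch varies only the terminal value per share; a full model would also re-discount the explicit-period cash flows at each WACC:

```typescript
// Rows = WACC values, columns = terminal growth rates.
// All parameter names are illustrative.
function sensitivityGrid(
  finalFcf: number,
  sharesOutstanding: number,
  waccs: number[],
  growthRates: number[],
): number[][] {
  return waccs.map(wacc =>
    growthRates.map(g => (finalFcf * (1 + g)) / (wacc - g) / sharesOutstanding),
  );
}
```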

Claude Code helps me run bull, base, and bear scenarios quickly. Instead of manually adjusting a dozen variables, I describe the scenario ("bear case: revenue growth decelerates to 5%, margins compress 200bps from competition") and it translates that into the specific model inputs.

How I Analyze Earnings Calls and Quarterly Reports

Earnings season is where the system proves its value. When a company on my watchlist reports, the workflow is:

Pull the latest 10-Q from SEC EDGAR. Claude Code reads the full filing and compares actual results against my existing model assumptions. Did revenue come in above or below my growth estimate? How did margins track? Any unusual items in the cash flow statement?

I ask Claude Code to identify the three most important changes from the prior quarter. Not a summary of everything -- just what actually matters for the investment thesis. A 50-basis-point margin improvement might matter more than a revenue beat if the thesis depends on operating leverage.

Then we update the watchlist data. The JSON files for each company contain structured financials going back to 2010. After every earnings report, the new quarter gets added and the DCF assumptions get revisited. If results were materially different from expectations, I update the growth trajectory and margins. If they confirmed the existing thesis, I leave the model alone.

The key discipline: Claude Code flags changes but I make the decisions. It might say "operating margin came in at 38.6% vs your 40% assumption" but it doesn't unilaterally decide to revise the model. That judgment is mine.

How I Track 21 Companies Without Losing My Mind

Solo coverage of 21 companies works because of the system, not in spite of being a one-person operation.

Each company has a JSON data file with a standardized structure: ticker, sector, financial summary (annual data going back 15 years), DCF assumptions, quarterly data, and thesis notes. Microsoft's file, for example, contains every annual financial metric from FY2010 through the latest reported period -- revenue, margins, EPS, free cash flow, all normalized from XBRL.
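A TypeScript interface makes that structure concrete. The field names here are my guesses at a reasonable schema, not the actual Carepital format:

```typescript
interface AnnualFinancials {
  fiscalYear: number;
  revenue: number;
  operatingMargin: number;
  eps: number;
  freeCashFlow: number;
}

interface WatchlistEntry {
  ticker: string;
  sector: string;
  annuals: AnnualFinancials[];   // ~15 years of normalized XBRL data
  dcfAssumptions: {
    revenueGrowth: number[];     // year-by-year growth rates
    terminalGrowth: number;
    wacc: number;
  };
  thesisNotes: string;
}

// Small helper: the most recent fiscal year on file for a company.
function latestFiscalYear(entry: WatchlistEntry): number {
  return Math.max(...entry.annuals.map(a => a.fiscalYear));
}
```

A typed schema like this is what lets the AI append a new quarter without silently changing the shape of the file.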

The update cycle is tied to earnings. Every quarter, all 21 companies report within a roughly six-week window. Claude Code and I work through them systematically -- pull the filing, update the data, revise assumptions if needed, regenerate the valuation. The Carepital website itself serves as my research dashboard, with the DCF calculator running live on each company's page.

For tracking thesis changes between earnings, I use Obsidian as my knowledge management layer. The memory system in my Claude Code configuration connects to daily notes, so when I start a session, Jarvis already knows what happened last time -- which companies reported, what surprised me, what I want to dig into next.

My Complete AI Investing Tech Stack

| Tool | Purpose | Why I Use It |
|------|---------|--------------|
| Claude Code | AI research partner | Reads files, runs code, builds models locally |
| SEC EDGAR | Financial data source | Free, authoritative, direct from companies |
| Obsidian | Knowledge management | Notes, memory, research organization |
| Carepital.com | Research dashboard | DCF calculator, watchlist, public analysis |
| Next.js | Website framework | Fast, SEO-optimized, server-rendered |
| TypeScript | DCF engine | Type-safe financial calculations, runs client-side |

The important thing about this stack: SEC EDGAR is the single source of truth. Everything flows from there. The XBRL normalization layer standardizes the data, Claude Code helps me analyze it, and the website presents it. No paid data subscriptions. No scraped third-party estimates.

What Results Has This System Produced?

I'll be honest about what this system does and doesn't deliver.

What it does: research that used to take hours now takes minutes. I can read a 200-page 10-K, extract the key financials, build a DCF model, and run scenario analysis in a single session. I cover 21 companies as a solo operator with the same rigor that used to require a team. Every number traces back to an SEC filing, not an AI hallucination or a Yahoo Finance scrape.

What it doesn't do: beat the market. AI doesn't give you alpha. It gives you leverage on time and attention. The edge, if there is one, comes from the systematic process -- every company gets the same analytical framework, every quarter gets reviewed, every thesis gets pressure-tested. The AI handles the grunt work so I can focus on the judgment.

The biggest win is coverage breadth. Before this system, I could realistically track maybe 5-8 companies in depth. Now I maintain detailed research on 21, updated quarterly, with full DCF valuations. That's the kind of advantage a one-person operation shouldn't be able to have.

Common Mistakes When Using AI for Stock Research

Trusting AI-generated financial data. This is the number one mistake. If Claude Code (or any AI) tells you a company's revenue was $50 billion, you verify it against the 10-K. My CLAUDE.md enforces citation requirements for exactly this reason. No source, no trust.

Using AI for stock picks. Asking "should I buy NVDA?" is the wrong question. AI has no edge in predicting prices. The right question is "what does NVDA's 10-K tell us about their data center margin trajectory?" Use it for analysis, not prophecy.

Skipping the configuration step. A generic AI session produces generic output. The CLAUDE.md configuration is what turns Claude Code from a chatbot into a research partner. Without research standards, citation requirements, and file structure conventions, you'll spend more time correcting output than saving.

Relying on AI for the final call. The model produces a number. The decision to buy, hold, or sell requires judgment about management quality, competitive dynamics, and risk tolerance that no AI should be making for you. AI is the analyst. You're the portfolio manager.

How to Get Started With Claude Code for Investing

The barrier to entry is lower than you'd think.

Step 1: Install Claude Code from Anthropic. It runs in your terminal -- Mac, Windows, or Linux.

Step 2: Create a CLAUDE.md file in your project directory. Start simple: set SEC EDGAR as your data source, require citations, define where research files go.
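A minimal starter might look like this -- the folder paths are placeholders to adapt to your own setup:

```markdown
## Research Standards
- Primary data source: SEC EDGAR only (never Yahoo Finance, never AI estimates)
- Every figure needs a citation: source + page number
- Flag anything you cannot verify against a filing

## Key Locations
| What | Where |
|------|-------|
| Filings | research/{TICKER}/filings/ |
| Notes | research/{TICKER}/notes.md |
```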

Step 3: Pick one company you already know well. If you own Apple, start with AAPL.

Step 4: Pull its latest 10-K from SEC EDGAR (edgar.sec.gov). Save it locally.

Step 5: Ask Claude Code to read the filing and extract key financials -- revenue, operating income, free cash flow. Verify every number against the original document.

Step 6: Build your first DCF model. Start with simple assumptions. The goal isn't perfection -- it's establishing the workflow.

Once you have one company working, the system scales naturally. Add a second company. Then a third. Before long, you have a watchlist, a quarterly update cycle, and a research process that compounds over time.

I've packaged my complete CLAUDE.md configuration, research workflow templates, and analysis prompts into the AI Investing Vault. You can download it free and skip the setup phase entirely.

Frequently Asked Questions

Do I need coding experience to use Claude Code for investing?

No. Claude Code runs in a terminal, but the interaction is conversational. You type questions in plain English, and it reads files and runs code on your behalf. The CLAUDE.md configuration is just a text file. If you can write a Word document, you can set this up. That said, knowing basic terminal commands will make you faster.

Is Claude Code better than ChatGPT for stock research?

For this specific workflow, yes. Claude Code operates on your local files -- it reads SEC filings on your hard drive, manipulates your data files, and maintains context about your research library. ChatGPT operates in a browser sandbox. If your research involves working with local data systematically, Claude Code is the better tool. For quick one-off questions, either works.

How much does Claude Code cost for investment research?

Claude Code requires a Claude Pro subscription ($20/month) or API access. The API charges per token, so a heavy research session analyzing a 10-K might cost $1-3 in API usage. For the amount of time it saves, the economics are straightforward. SEC EDGAR data is free. The total cost of this entire stack is under $25/month.

Can AI replace a financial analyst?

No. AI can do 80% of what a junior analyst does -- data extraction, financial modeling, report summarization. But the 20% that matters -- forming an investment thesis, assessing management quality, weighing risks that don't appear in filings -- requires human judgment. AI is the best research assistant you'll ever have. It's not a replacement for the analyst.

How accurate is AI-powered stock research?

The AI itself doesn't determine accuracy -- your data source does. When Claude Code pulls numbers from SEC EDGAR XBRL data, those numbers are as accurate as the company's official filings. When it generates a DCF valuation, the output is only as good as your assumptions. The AI handles calculation and extraction accurately. The judgment calls are on you.


Last updated: March 10, 2026. Charlie Chan is the founder of Carepital and has been building AI-powered investing systems since 2024. This content is for educational purposes and is not personalized financial advice.

