What AI Can Actually Build: Financial Analytics Dashboard

Building an AI-Powered Analytics Dashboard — A Financial Portfolio Case Study

Most dashboards show you data. This one is built to surface decisions. Here is how we designed and deployed a live AI analytics dashboard using a real investment portfolio as the worked example — and what the same pattern looks like applied to any structured dataset.

The Concept: Dashboards That Surface Decisions

One of the projects we set out to build at MarrSynth was an AI-powered analytics dashboard — not a reporting tool, but something that applies predictive modeling and AI-generated signals to whatever data you feed it, then presents the output as actionable decisions rather than raw numbers.

The design principle is straightforward: most dashboards are built around metrics. A well-designed analytics dashboard should be built around questions. Not "what is the current value?" but "what should I do about it?" That distinction shapes everything — what to surface, how to frame it, what context to provide, and where to place the decision prompt.

To make this concrete rather than theoretical, we needed a real dataset with real structure and real stakes. We chose a financial portfolio: a 197-ticker investment universe built around a structured 2026 technology thesis, assessed by five AI models, and stress-tested across 21 economic scenarios. It is a dataset rich enough to demonstrate every dimension of what an AI-powered analytics dashboard can do.

The result is the AI Financial Analytics Dashboard — a live web application that reads from a Google Sheet, updates automatically, and surfaces decision intelligence alongside the raw data. This article walks through how it was built, what the underlying data represents, and how the same pattern applies beyond finance.

Two follow-up articles will go deeper on the upstream work: how the 2026 investment thesis was constructed using AI, and how the 21 stress scenarios were designed and run to pressure-test the resulting portfolio. This article focuses on the dashboard itself.

Understanding the Data: Thesis, FitScores, and Stress Tests

Before getting into the dashboard architecture, the underlying data needs context — because without it, the numbers on screen are just numbers. The dashboard makes sense once you understand what it is actually measuring and why.

The 2026 Investment Thesis

The data originates from a structured investment research project conducted across late 2025 and early 2026. The starting point was a 10-theme technology thesis: a set of macro-level convictions about where durable, compounding value would be created over the next several years.

The ten themes were: mass deployment of AI agents, advanced semiconductors beyond GPUs, autonomous systems and robotics, power grid infrastructure and electrification, energy storage at scale, synthetic biology and precision biomanufacturing, space infrastructure and commercial launch, quantum-adjacent technologies, financial tokenization and digital assets, and defense and dual-use technology.

Each theme represents a structural shift rather than a single bet — the kind of multi-year transition that tends to lift multiple companies across an entire value chain rather than producing one winner. The thesis was deliberately cross-sector, recognizing that AI infrastructure spending, for example, shows up simultaneously in semiconductor revenues, utility demand, industrial equipment orders, and data center real estate.

How the thesis was built — the research process, the AI prompting methodology, the refinement across multiple models — is the subject of an upcoming article. What matters for understanding the dashboard is that the thesis defined the evaluation criteria for every ticker in the universe.

What a FitScore Is

With a thesis defined, the next challenge was systematic evaluation: how well does any given company actually align with these themes? The answer was the FitScore — a 0-to-1 rating of thesis alignment assigned to each ticker in the universe.

Rather than relying on a single model's assessment, FitScores were generated independently by five AI systems: Gemini, Grok, Claude, Perplexity, and ChatGPT. Each model received the same thesis framework and evaluated each company against it, producing a score and a written rationale. The five scores were then averaged into a composite FS_Mean.

Using multiple models serves the same purpose as using multiple analysts: each has different training data, different reasoning patterns, and different blind spots. Gemini and Perplexity scored consistently higher on average (0.884 and 0.866 mean respectively), while Claude scored most conservatively (0.742 mean). The spread between models is itself informative — high consensus across all five suggests a company sits squarely within a theme; high variance suggests it sits at the edge of one.

A composite score above roughly 0.85 indicated strong thesis alignment and served as the primary inclusion criterion. Of 197 tickers evaluated, 94 cleared this bar and received a non-zero portfolio allocation. The 103 excluded tickers are not necessarily poor investments — they simply do not align well enough with the specific thesis themes to warrant inclusion in a thesis-driven portfolio.
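The composite-and-cutoff logic is simple enough to sketch directly. The snippet below is an illustrative implementation, not the actual scoring code: it averages the five per-model scores into FS_Mean, reports the spread between models as a disagreement signal, and applies the roughly-0.85 inclusion bar described above. The example ticker's scores are invented.

```python
from statistics import mean, pstdev

def composite_fitscore(scores: dict) -> dict:
    """Combine per-model FitScores (0-1) into a composite with a spread signal.

    `scores` maps model name -> FitScore. The 0.85 bar matches the
    primary inclusion criterion described in the article; the spread
    field captures the consensus-vs-variance signal.
    """
    values = list(scores.values())
    fs_mean = mean(values)
    return {
        "FS_Mean": round(fs_mean, 3),
        "spread": round(pstdev(values), 3),  # high spread = models disagree
        "included": fs_mean >= 0.85,         # primary inclusion criterion
    }

# Example: one ticker scored by all five models (illustrative numbers)
result = composite_fitscore({
    "Gemini": 0.91, "Grok": 0.88, "Claude": 0.80,
    "Perplexity": 0.89, "ChatGPT": 0.87,
})
```

A low spread with a high mean is the "sits squarely within a theme" case; a high spread flags a ticker worth a manual look regardless of its mean.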

Analyst Targets and Projected Returns

Alongside the AI FitScores, each ticker was evaluated against Wall Street analyst price targets — drawing on consensus low, midpoint, and high estimates — as well as independent price targets generated by each of the five AI models. These were blended into a projected 1-year return for each position, calculated from a base date of December 31, 2025.

These projections are systematic estimates based on thesis alignment and price target consensus, intended to inform relative positioning decisions within a thesis-driven portfolio. They are not recommendations or predictions of absolute outcomes.
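The article does not specify the exact blend weights between analyst consensus and AI-model targets, so the sketch below assumes an illustrative 50/50 split between the analyst midpoint return and the mean of the five AI-model target returns. Treat the weighting as a placeholder, not the actual methodology.

```python
def projected_return(entry_price, analyst_mid_target, ai_targets):
    """Blend analyst and AI price targets into a projected 1-year return.

    Assumes a 50/50 blend for illustration; the real weighting used in
    the portfolio is not documented here.
    """
    analyst_ret = analyst_mid_target / entry_price - 1
    ai_ret = (sum(ai_targets) / len(ai_targets)) / entry_price - 1
    return 0.5 * analyst_ret + 0.5 * ai_ret

# Illustrative: entry at 100, analyst midpoint 120, five AI targets
proj = projected_return(100.0, 120.0, [110, 130, 125, 115, 120])
```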

The 21 Stress Scenarios

The final layer of the data is the stress test results. Rather than evaluating the portfolio only under benign assumptions, 21 distinct economic scenarios were constructed — spanning a range from a "Gridlock Base Case" (the modal outcome, assigned 55% probability) through "Productivity Surge," "CapEx Reckoning," "Stagflationary Shock," and "Multidimensional Polarization," among others.

Each scenario specified different assumptions about interest rates, inflation, technology adoption pace, trade policy, and regulatory environment. Each ticker's return and FitScore were computed under each scenario, then combined into a weighted average using the scenario probabilities. These are the Stress Return_WeightedAvg and Stress FitScore_WeightedAvg columns visible in the dashboard screener.
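The weighting step is a straightforward probability-weighted average. The sketch below uses a toy three-scenario example (the real run spans all 21); the return figures are invented, but the 0.55 base-case probability matches the article.

```python
def stress_weighted(values_by_scenario, probs):
    """Probability-weighted average across scenarios.

    `values_by_scenario` and `probs` are parallel lists; the
    probabilities should sum to 1. The same function produces both
    Stress Return_WeightedAvg and Stress FitScore_WeightedAvg,
    depending on which per-scenario values are passed in.
    """
    assert abs(sum(probs) - 1.0) < 1e-9, "scenario probabilities must sum to 1"
    return sum(v * p for v, p in zip(values_by_scenario, probs))

# Toy three-scenario illustration
returns = [0.12, 0.30, -0.15]   # per-scenario return for one ticker (invented)
probs   = [0.55, 0.20, 0.25]    # scenario probabilities (base case = 0.55)
wavg = stress_weighted(returns, probs)
```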

The stress test process — how the 21 scenarios were designed to span the realistic outcome space, how AI was used to assign and refine the probabilities, and how the weighted results changed the portfolio construction decisions that followed — is the subject of a second planned follow-up article.

Building the Dashboard

With the data context established, the dashboard build itself was a three-phase process: build a static version to validate the design, correct a framing error that crept into the initial version, then make it live by connecting it to Google Sheets.

Phase 1: Static Prototype

The first step was uploading the FitScore spreadsheet to Claude and asking it to analyze the data and build a financial analytics dashboard. Claude read the file using pandas, extracted the key metrics, and produced a self-contained HTML file with all 94 included positions embedded as a JavaScript data array. Charts were built with Chart.js. The initial dashboard had four tabs — portfolio overview, position screener, AI model comparison, and stress scenarios — plus a decision intelligence panel and an interactive scenario simulator.

The design goal throughout was density without clutter: every element should answer a question a real analyst would actually ask, not just display a number. The decision intelligence panel is the most explicit expression of this — three cards, each stating a specific condition, quantifying the impact, showing a confidence score, and offering a prompt to model the action.
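The structure of a decision card can be captured in a small schema. This is an illustrative sketch of the four fields each card carries; the field names and the example values are hypothetical, not the dashboard's actual data model.

```python
from dataclasses import dataclass

@dataclass
class DecisionCard:
    """One card in the decision intelligence panel (illustrative schema)."""
    condition: str      # the specific condition being flagged
    impact: str         # the quantified effect
    confidence: float   # 0-1 confidence score shown on the card
    action_prompt: str  # the prompt offered to model the action

# Hypothetical example values
card = DecisionCard(
    condition="NVDA concentration approaching the single-position limit",
    impact="position weight nearing the cap",
    confidence=0.9,
    action_prompt="Model trimming NVDA and redistributing to underweights",
)
```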

Phase 2: Getting the Framing Right

Before moving to live data, an important framing error needed correcting. The initial dashboard reported 23 confirmed 1-year target hits with a "100% hit rate." Technically accurate — every confirmed outcome was a hit — but deeply misleading without the date context.

The 1-year measurement window opened on December 31, 2025. As of March 22, 2026, only 81 days had elapsed — 22% of the full year. The correct statement was that 23 of 94 included positions had already hit their full annual price target in just 22% of the measurement window, with zero confirmed misses and 71 positions still tracking. That framing is actually more remarkable than "100% hit rate": these positions crossed the finish line roughly four months early. The analysis also revealed that AMD had exceeded its target entirely, reaching 141% of its projected annual return in under three months.

Every surface referencing target outcomes was updated: the hero stats, KPI cards, outcome chart, hits table column headers, and decision intelligence panel. This kind of framing work — ensuring numbers mean what readers will infer they mean — is human judgment layered on top of correct computation. AI produces accurate numbers quickly; it takes a human reviewer to catch when accurate numbers tell a misleading story.

Phase 3: Live Data via Google Sheets

The static version was a snapshot. Making it live required replacing the hardcoded data array with a real-time feed. Four options were evaluated — manual HTML refresh, Flask reading the xlsx file on each load, a live price feed for real-time tracking, and Google Sheets as the data layer. Google Sheets won on practicality: it is the same environment where the spreadsheet already lives, it is accessible from any device, and its "Publish to web as CSV" feature requires no authentication, no API keys, and no Google Cloud project setup.

The architecture has three components. The Google Sheet is published via File → Share → Publish to web, producing a stable URL that returns fresh CSV on every request. A Flask blueprint exposes a /api/fitscore-data endpoint: it fetches the CSV, parses it, filters to included positions, normalizes numeric fields, and returns clean JSON — with a five-minute server-side cache to avoid excessive external calls. The dashboard HTML fires a fetch('/api/fitscore-data') call on load, populates all charts and tables from the response, and falls back silently to a baked-in snapshot if the API is unreachable.
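The data layer behind that endpoint can be sketched in standard-library Python. This is an illustrative reconstruction, not the actual financial_analytics_route.py: the environment-variable name and column headers are assumptions, and in the live app a Flask blueprint route would wrap get_fitscore_data() and return it with jsonify().

```python
import csv
import io
import os
import time
import urllib.request

# Published-to-web CSV URL, kept in an environment variable rather than
# hardcoded in source (variable name is illustrative)
SHEET_CSV_URL = os.environ.get("FITSCORE_SHEET_CSV_URL", "")

_cache = {"ts": 0.0, "rows": None}
CACHE_TTL = 300  # the five-minute server-side cache

def parse_rows(csv_text, numeric_fields=("FS_Mean", "Projected Return", "Allocation")):
    """Coerce numeric columns and filter to included positions.

    Column names are assumed for illustration; match them to the
    actual Sheet headers.
    """
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        for field in numeric_fields:
            try:
                row[field] = float(row.get(field) or 0)
            except ValueError:
                row[field] = 0.0
        if row["Allocation"] > 0:  # keep only positions with an allocation
            rows.append(row)
    return rows

def get_fitscore_data():
    """Return cached rows, refetching the published CSV when stale."""
    now = time.time()
    if _cache["rows"] is None or now - _cache["ts"] > CACHE_TTL:
        with urllib.request.urlopen(SHEET_CSV_URL, timeout=10) as resp:
            _cache["rows"] = parse_rows(resp.read().decode("utf-8"))
        _cache["ts"] = now
    return _cache["rows"]
```

The parsing and caching are deliberately separated from the fetch so the route stays testable offline; the silent-fallback behavior would live in the route handler, catching fetch failures and returning the baked-in snapshot instead.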

The Google Sheet URL is stored as a Render environment variable rather than hardcoded in source. The resulting update workflow is: edit the Sheet, wait up to five minutes, visit the site. No code change. No redeploy. No manual file updates.

What the Dashboard Shows

The live dashboard organizes the data into four tabs, with a persistent decision intelligence panel and scenario simulator always visible.

The Overview tab provides a portfolio-level picture: FitScore distribution across all 197 tickers, projected return distribution across the 94 included positions, sector allocation by weight and average FitScore, a FitScore vs. return scatter plot with bubble size representing allocation weight, and the target outcome breakdown. The goal is a complete orientation in under thirty seconds.

The Position Screener is the most interactive element — a live-filtered table across all 94 positions with search by ticker or sector, sort by FitScore, projected return, allocation, or stress return, and sector filter buttons. Because the table renders from the live API data, it automatically reflects any updates made to the Sheet.

The AI Models tab compares the five models' scoring behavior — mean, minimum, and maximum across the full universe — and shows the confirmed early target hits with their projected versus actual returns. This is where the model validation story lives: which AI systems scored highest, where they diverged, and how early results map back to those scores.

The Stress and Scenarios tab shows stress-weighted average returns by sector from the 21-scenario stress test, a thesis scenario probability chart, and the interactive macro simulator. The simulator uses actual sector weights from the portfolio to translate three macro inputs — tech sector move, rate cut or hike, energy sector move — into estimated portfolio impact, recomputed in real time as the sliders are adjusted.
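The simulator's core computation is a weighted sum over sector exposures. The sketch below is an assumption about how the three sliders map to portfolio impact — the sector weights and the per-100bps rate sensitivity are invented for illustration, not the dashboard's calibrated values.

```python
def simulate_portfolio_impact(sector_weights, tech_move, rate_change_bps,
                              energy_move, rate_sensitivity=-0.02):
    """Translate three macro sliders into an estimated portfolio move.

    `sector_weights` are fractions of the portfolio; `rate_sensitivity`
    is an assumed portfolio-wide return impact per 100bps of rate hike.
    """
    impact = sector_weights.get("Technology", 0) * tech_move
    impact += sector_weights.get("Energy", 0) * energy_move
    impact += (rate_change_bps / 100) * rate_sensitivity
    return impact

# Illustrative weights and slider positions
weights = {"Technology": 0.55, "Energy": 0.10}
est = simulate_portfolio_impact(weights, tech_move=-0.05,
                                rate_change_bps=50, energy_move=0.02)
```

In the live dashboard the same recomputation fires on every slider change, so the estimate updates in real time.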

The Decision Intelligence panel is the piece most directly tied to the design goal. It states what to do about the data, not just what the data says: NVDA concentration approaching the single-position limit, AVGO underweight relative to its FitScore, and the early target validation signal. Each card quantifies the condition, shows a confidence score, and surfaces the next action.

Key Lessons from the Build

The data you already have is usually enough to start. The entire dashboard was built from one spreadsheet that already existed. No new data collection, no API subscriptions, no database setup. The value was in organizing and surfacing what was already there.

Accurate numbers can still tell the wrong story. The "100% hit rate" finding was technically correct and completely misleading. Catching it required thinking about the date context, not just reading the data. AI computes accurately and quickly; it takes deliberate human review to catch when correct numbers carry incorrect implications.

The publish-as-CSV pattern is underused. Google Sheets' ability to publish a tab as a live CSV endpoint is one of the most practical low-infrastructure integrations available. No authentication overhead, no SDK, no rate limits to manage carefully — just a URL that returns fresh data. Combined with server-side caching in Flask, it is a surprisingly capable data layer for dashboards that do not need sub-minute freshness.

Graceful degradation is worth building in from the start. The silent fallback to a baked-in snapshot when the API fails was a small addition that makes the dashboard meaningfully more robust. The pattern — try live data, fall back to cached snapshot — is worth applying to any dashboard that depends on an external data source.

AI is best used as builder, not as sole judge. Claude designed the layout, wrote the Flask route, configured the charts, and handled the CSV parser edge cases correctly and quickly. The judgment calls — which metrics to surface, how to frame the target hit situation, what the decision cards should actually say — remained human decisions. That division of labor is worth being intentional about.

Paths Forward

The current dashboard is a read-only view of a single spreadsheet. The most impactful extensions connect it to additional data sources or close the loop between observation and action.

Live Price Feed for Automatic Target Tracking

The most direct upgrade is automatic price-based target tracking. Each position's implied target price is entry_price × (1 + projected_return). Fetching current prices from a free API — Yahoo Finance via the yfinance Python library, or Alpaca's free market data tier — would compute actual returns dynamically and update target hit status without any manual Sheet edits. The 1-Yr Target Met? column becomes a computed output rather than a manually maintained field.
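The computed-status logic described above is small. This sketch keeps the price lookup out of the function so it stays testable offline; in the actual upgrade, `current_price` would come from a live feed such as yfinance. The example figures are invented.

```python
def target_status(entry_price, projected_return, current_price):
    """Compute the implied target price and whether it has been hit.

    Implements: target = entry_price * (1 + projected_return).
    Also reports progress as a fraction of the projected annual return,
    the metric behind statements like "141% of projected return".
    """
    target = entry_price * (1 + projected_return)
    progress = None
    if projected_return:
        progress = (current_price / entry_price - 1) / projected_return
    return {
        "target_price": target,
        "hit": current_price >= target,
        "pct_of_projected_return": progress,
    }

# Illustrative numbers (not any actual position's figures)
status = target_status(entry_price=100.0, projected_return=0.25,
                       current_price=135.0)
```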

Macroeconomic Data Overlay

The scenario simulator currently uses fixed macro sensitivity assumptions. Connecting to live macro data — Federal Reserve rates and yield curve data from FRED (the St. Louis Fed's free public API), VIX from a market data provider — would let the stress scenarios update automatically as the macro environment shifts. A dashboard showing portfolio sensitivity to the current actual yield curve is substantially more useful than one showing sensitivity to a fixed hypothetical.

Earnings Calendar Integration

Many high-FitScore positions carry meaningful earnings volatility. Connecting to an earnings calendar API — Alpha Vantage's free earnings endpoint, for example — would let the decision intelligence panel flag upcoming events for high-allocation positions before they occur rather than after.

Multiple Theses and Portfolios

The current architecture assumes one Sheet and one thesis. The Flask route is easily extended to support multiple Sheet URLs — one per thesis, portfolio, or watchlist. A dropdown in the nav switches data sources while the visualization layer stays identical. This is the pattern for scaling from one portfolio to a family of portfolios without duplicating infrastructure.
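A minimal sketch of that extension, with hypothetical environment-variable names and portfolio keys: a lookup table maps the dropdown selection to a Sheet URL, and an extended route (e.g. /api/fitscore-data?portfolio=tech-2026) would resolve its data source through it.

```python
import os

# One published CSV URL per thesis, portfolio, or watchlist
# (env var names and keys are illustrative)
SHEET_URLS = {
    "tech-2026": os.environ.get("SHEET_URL_TECH_2026", ""),
    "watchlist": os.environ.get("SHEET_URL_WATCHLIST", ""),
}

def resolve_sheet_url(portfolio_key):
    """Map a dropdown selection to its data source.

    Unknown keys fail loudly rather than silently serving the
    wrong portfolio's data.
    """
    if portfolio_key not in SHEET_URLS:
        raise KeyError(f"unknown portfolio: {portfolio_key}")
    return SHEET_URLS[portfolio_key]
```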

Portfolio Decision Journal

A journal tab connected to a second Sheet — where thesis notes, entry rationales, and decision records are logged with timestamps — would let the dashboard show not just current positions but the reasoning behind them and how that reasoning has evolved. For long-horizon investors, the decision log often carries more value than the current price.

Alert Integration via Telegram

The MarrSynth stack already has Telegram available as a native integration path. A scheduled Flask job running on Render's cron feature could check daily for new confirmed target hits, concentration limit breaches, or significant stress scenario changes and push a summary message automatically. The dashboard becomes a passive monitoring system rather than something requiring active checking.
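A sketch of what that cron job might send, under stated assumptions: the message composition is invented for illustration, and the send function uses the Telegram Bot API's sendMessage method with a token and chat ID you would supply via environment variables.

```python
import json
import urllib.parse
import urllib.request

def build_alert(new_hits, breaches):
    """Compose the daily summary a scheduled job would push.

    `new_hits` and `breaches` are lists of ticker strings; the wording
    here is illustrative.
    """
    lines = []
    if new_hits:
        lines.append("Targets hit: " + ", ".join(new_hits))
    if breaches:
        lines.append("Concentration breaches: " + ", ".join(breaches))
    return "\n".join(lines) or "No changes today."

def send_telegram(token, chat_id, text):
    """Push a message via the Telegram Bot API sendMessage method."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    req = urllib.request.Request(url, data=data)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Wired to Render's cron feature, the message composition would run against the same /api/fitscore-data output the dashboard already consumes, so the alert logic and the dashboard never disagree.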

The Broader Pattern

The financial portfolio is the example, not the point. The underlying pattern — structured data assessed by AI models against a defined thesis, surfaced as decision intelligence through a live dashboard — applies anywhere you have data with enough structure to support systematic evaluation.

A product team's feature backlog, scored by AI against strategic priorities and customer impact, becomes a prioritization dashboard. A vendor evaluation spreadsheet, assessed against weighted criteria by multiple AI models, becomes a procurement decision tool. A reading list, rated against a set of learning objectives, becomes a knowledge-gap tracker. The data layer, the scoring methodology, and the visualization architecture are identical in every case.

What changes is the domain knowledge that shapes the thesis — the criteria against which things are evaluated — and the human judgment that interprets the output. AI handles the systematic application of those criteria across a large universe. The framing, the decision thresholds, and the actions taken on the signals remain human decisions.

That is the design principle the dashboard is built around, and the one worth carrying forward into whatever dataset comes next.

Try the Dashboard — and What's Next

The live dashboard is at marrsynth.onrender.com/projects/financial-analytics. The data refreshes automatically from the source Sheet — what you see reflects the current state of the 2026 thesis portfolio, including any target hits confirmed since this article was published.

Two follow-up articles are planned for this thread. The first will cover how the 2026 investment thesis was constructed using AI: the process of identifying and refining the ten themes, the prompting methodology used to generate consistent FitScore assessments across five models, and how composite scoring was calibrated and validated. The second will go deep on the 21 stress scenarios: how they were designed to span the realistic outcome space, how AI was used to assign and refine scenario probabilities, and how the stress-weighted results changed the portfolio construction decisions that followed.

The source files behind the dashboard — financial-analytics.html and financial_analytics_route.py — follow the same pattern described here and can be adapted for any structured dataset. If you have a spreadsheet you have been wanting to turn into a live decision dashboard, the approach applies directly.


Part of the What AI Can Actually Build project series on MarrSynth. Related reading: How We Built This Site with Claude Code · Building and Monetizing AI Content Sites

Coming next: How We Built a 2026 Investment Thesis Using AI · 21 Stress Scenarios: Pressure-Testing a Portfolio with AI

→ View the live dashboard  ·  Questions or ideas for extensions? Get in touch.