Six Cortex AI functions. Five enterprise tables. A fully self-contained deployment — tested end-to-end, no workspace, no external files, no manual steps.

On April 21, 2026, Snowflake made something official that practitioners had been sensing for months: Snowflake Intelligence and Cortex Code are now the control plane for the agentic enterprise.
AI is no longer just answering questions. It’s taking action.
The shift matters because of what it changes operationally. In the old model, AI lived at the edge — a separate service you called, waited on, and piped results back from. In the agentic model, AI is embedded inside your data platform, governed by the same RBAC that controls your tables, and callable from the same SQL your engineers already write every day.
In this guide, I’ll walk through building a production-grade Streamlit dashboard that puts all of this to work — and I mean production-grade in the literal sense. Every query, every procedure, every line of the Streamlit app in this article was tested end-to-end on a live Snowflake account before publication.
The complete package: five enterprise data tables with 70 synthetic records, six Cortex AI functions demonstrated with real SQL, and a six-tab Streamlit app deployed using a technique I haven’t seen documented elsewhere — Python stored procedures that build the entire application line-by-line and write it directly to stage, using chr() substitution to handle SQL parser conflicts at runtime. No workspace. No external files. No manual saves. No rename step.
Run the worksheet. Get the app.
What We’re Building

Part 1: Database & Schema
CREATE DATABASE IF NOT EXISTS AGENTIC_ENTERPRISE_DEMO;
CREATE SCHEMA IF NOT EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA;
Every object we create — tables, procedures, stage, the Streamlit app — lives inside AGENTIC_ENTERPRISE_DEMO.APP_DATA. Schema-level isolation means you can grant access independently, audit it separately, and tear it down cleanly with a single DROP SCHEMA when you're done.
Part 2: Five Enterprise Tables
Each table was designed to give one or more Cortex functions a natural, realistic home.
CREATE OR REPLACE TABLE AGENTIC_ENTERPRISE_DEMO.APP_DATA.CUSTOMER_FEEDBACK (
FEEDBACK_ID NUMBER AUTOINCREMENT PRIMARY KEY,
CUSTOMER_NAME VARCHAR(100),
COMPANY VARCHAR(100),
INDUSTRY VARCHAR(50),
FEEDBACK_TEXT VARCHAR(2000),
CHANNEL VARCHAR(30),
SUBMITTED_AT TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP(),
REGION VARCHAR(30)
);
CREATE OR REPLACE TABLE AGENTIC_ENTERPRISE_DEMO.APP_DATA.SUPPORT_TICKETS (
TICKET_ID NUMBER AUTOINCREMENT PRIMARY KEY,
CUSTOMER_NAME VARCHAR(100),
COMPANY VARCHAR(100),
SUBJECT VARCHAR(200),
DESCRIPTION VARCHAR(3000),
PRIORITY VARCHAR(20),
STATUS VARCHAR(20),
CREATED_AT TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP(),
CATEGORY VARCHAR(50)
);
CREATE OR REPLACE TABLE AGENTIC_ENTERPRISE_DEMO.APP_DATA.AI_ADOPTION_METRICS (
METRIC_ID NUMBER AUTOINCREMENT PRIMARY KEY,
ENTERPRISE_NAME VARCHAR(100),
INDUSTRY VARCHAR(50),
AI_USE_CASE VARCHAR(100),
ADOPTION_STAGE VARCHAR(30),
EFFICIENCY_GAIN_PCT FLOAT,
COST_REDUCTION_PCT FLOAT,
EMPLOYEES_IMPACTED NUMBER,
REPORT_DATE DATE
);
CREATE OR REPLACE TABLE AGENTIC_ENTERPRISE_DEMO.APP_DATA.WORKFORCE_IMPACT (
IMPACT_ID NUMBER AUTOINCREMENT PRIMARY KEY,
DEPARTMENT VARCHAR(50),
ROLE_CATEGORY VARCHAR(50),
AI_TOOL_USED VARCHAR(100),
HOURS_SAVED_WEEKLY FLOAT,
PRODUCTIVITY_INCREASE_PCT FLOAT,
TASKS_AUTOMATED NUMBER,
QUARTER VARCHAR(10),
YEAR NUMBER
);
CREATE OR REPLACE TABLE AGENTIC_ENTERPRISE_DEMO.APP_DATA.ENTERPRISE_NEWS (
ARTICLE_ID NUMBER AUTOINCREMENT PRIMARY KEY,
TITLE VARCHAR(300),
CONTENT VARCHAR(5000),
SOURCE VARCHAR(100),
PUBLISHED_DATE DATE,
LANGUAGE VARCHAR(20) DEFAULT 'en'
);
The five tables map directly to the six-tab dashboard: AI_ADOPTION_METRICS drives Executive Overview, CUSTOMER_FEEDBACK drives Voice of Customer, SUPPORT_TICKETS drives Support Intelligence, WORKFORCE_IMPACT drives Workforce Impact, and ENTERPRISE_NEWS drives News Intelligence — with the Cortex AI Lab tab exercising the Cortex functions across them.

Part 3: Synthetic Data — 70 Records Across 10 Fictitious Companies
Synthetic data isn’t random — it’s engineered: 10 fictitious enterprises across industries, sentiment spanning +0.87 to -0.30, urgency-skewed tickets (40/30/30), 60/40 production–pilot adoption, and cross-table coverage built for real AI insights.

3A: Customer Feedback — 12 records
INSERT INTO AGENTIC_ENTERPRISE_DEMO.APP_DATA.CUSTOMER_FEEDBACK
(CUSTOMER_NAME, COMPANY, INDUSTRY, FEEDBACK_TEXT, CHANNEL, SUBMITTED_AT, REGION)
VALUES
('Lena Hargrove', 'Meridian Services', 'BPO', 'Snowflake Intelligence has completely transformed how our team handles customer queries. The AI agent understands context and suggests responses that are spot-on. Our handle time dropped by 40% and customer satisfaction scores are at an all-time high. This is exactly what enterprise AI should look like.', 'Survey', '2026-04-15 10:30:00', 'EMEA'),
('Rafael Cortez', 'Pinnacle Devices', 'Consumer Electronics', 'The Cortex AI functions for sentiment analysis and entity extraction have been game-changers for our product team. We can now process millions of reviews across 30 markets in real-time. The insights are incredibly accurate and have directly influenced our product roadmap.', 'Email', '2026-04-14 14:22:00', 'NA'),
('Anika Sharma', 'Titan Equipment Co', 'Equipment Rental', 'Fleet management with Snowflake has been excellent overall. The predictive maintenance models reduced our downtime significantly. However, the initial setup was complex and required significant data engineering effort. Would appreciate more out-of-the-box templates for industrial use cases.', 'Support Portal', '2026-04-13 09:15:00', 'NA'),
('Klaus Brandt', 'NovaSilicon', 'Semiconductor', 'Cortex Code is exactly what our engineers needed. Writing simulation scripts is now 3x faster, and the code quality is consistently high. The governance features ensure our proprietary parameters stay secure. The only improvement I would suggest is better support for domain-specific semiconductor modeling languages.', 'Survey', '2026-04-12 16:45:00', 'NA'),
('Mei Taniguchi', 'Wayfinder Labs', 'Navigation', 'We integrated Snowflake Intelligence into our route optimization pipeline and the results exceeded expectations. The AI agent can reason over complex geospatial data and suggest optimal routes considering real-time traffic, weather, and road conditions. Truly impressive technology.', 'Email', '2026-04-11 11:00:00', 'APAC'),
('Sophie Landry', 'Ironclad Financial', 'Financial Services', 'The fraud detection capabilities powered by Cortex are outstanding. We caught 72% more fraudulent transactions in the first month alone. The real-time processing and low false-positive rate have saved us millions. Governance controls give our compliance team full confidence.', 'Survey', '2026-04-10 13:30:00', 'EMEA'),
('Connor Doyle', 'Evergreen Health', 'Healthcare', 'Clinical note summarization has been a lifesaver for our physicians. They spend 50% less time on documentation and more time with patients. The AI summaries are accurate and capture the key clinical details. We need HIPAA certification documentation to be more readily available though.', 'Support Portal', '2026-04-09 08:00:00', 'NA'),
('Jin Zhao', 'BrightCart Retail', 'Retail', 'Disappointed with the recommendation engine performance during peak holiday season. While it works great normally, we experienced significant latency issues when traffic spiked 10x. The system needs better auto-scaling capabilities for retail-specific demand patterns. Support was slow to respond.', 'Support Portal', '2026-04-08 20:15:00', 'APAC'),
('Diego Fuentes', 'Nexus Digital', 'Technology', 'Cortex Code has accelerated our development workflow dramatically. Automated code reviews catch issues our human reviewers miss, and the suggestions are contextually aware of our codebase. Integration with our CI/CD pipeline was seamless. Best developer tool investment we have made this year.', 'Email', '2026-04-07 15:00:00', 'LATAM'),
('Amira Khalil', 'Solaris Power', 'Energy', 'The predictive maintenance AI has been transformative for our wind farm operations. We can now predict turbine failures 2 weeks in advance with 92% accuracy. However, the model retraining process could be more automated. The current approach requires too much manual intervention from our data science team.', 'Survey', '2026-04-06 12:00:00', 'EMEA'),
('Nathan Park', 'Nexus Digital', 'Technology', 'The incident response automation needs significant improvement. False alerts are too frequent and the contextual understanding of our infrastructure is shallow. We had to build custom integrations on top to make it usable. The underlying platform is solid but the out-of-box experience for incident response falls short.', 'Support Portal', '2026-04-05 22:30:00', 'APAC'),
('Camila Ortega', 'BrightCart Retail', 'Retail', 'Inventory optimization AI has saved us 35% on carrying costs and virtually eliminated stockouts for our top 500 SKUs. The demand forecasting is remarkably accurate even for seasonal items. Snowflake platform makes it easy to connect our POS, warehouse, and supplier data in one place.', 'Email', '2026-04-04 10:00:00', 'LATAM');
3B: Support Tickets — 10 records
INSERT INTO AGENTIC_ENTERPRISE_DEMO.APP_DATA.SUPPORT_TICKETS
(CUSTOMER_NAME, COMPANY, SUBJECT, DESCRIPTION, PRIORITY, STATUS, CREATED_AT, CATEGORY)
VALUES
('Lena Hargrove', 'Meridian Services', 'Cortex Intelligence latency during peak hours', 'We are experiencing response times of 8-12 seconds during our peak customer service hours (9am-5pm GMT). Normal response time is under 2 seconds. This is impacting our SLA compliance. We need immediate investigation into the root cause. Our account has dedicated compute but it seems the queries are queuing.', 'Critical', 'In Progress', '2026-04-20 09:30:00', 'Performance'),
('Rafael Cortez', 'Pinnacle Devices', 'Sentiment analysis accuracy for Asian languages', 'The AI_SENTIMENT function works brilliantly for English and European languages but we are seeing only 65% accuracy for Japanese and Korean product reviews. We need at minimum 85% accuracy to replace our current third-party solution. Can we get access to fine-tuned models for these languages?', 'High', 'Open', '2026-04-19 14:00:00', 'Feature Request'),
('Derek Lawson', 'Titan Equipment Co', 'Data pipeline connectivity issue with IoT sensors', 'Our IoT sensor data pipeline from construction equipment is intermittently failing to load into Snowflake. We process approximately 500M sensor readings daily and are seeing 3-5% data loss during ingestion spikes. The Snowpipe notifications from our SQS queue are being acknowledged but data is not appearing in the target tables.', 'Critical', 'In Progress', '2026-04-18 07:45:00', 'Data Pipeline'),
('Klaus Brandt', 'NovaSilicon', 'Cortex Code suggestion quality for VHDL', 'Cortex Code provides excellent suggestions for Python and SQL but the quality for VHDL and SystemVerilog is not production-ready. We need better hardware description language support for our semiconductor design workflows. Can we provide custom training data to improve suggestions for domain-specific languages?', 'Medium', 'Open', '2026-04-17 11:20:00', 'Feature Request'),
('Anika Sharma', 'Titan Equipment Co', 'Billing discrepancy for Snowpark compute', 'We are seeing unexpected charges on our Snowpark Container Services bill. Our estimated monthly cost was $45K but we were billed $78K. The compute pool seems to be running even when no jobs are scheduled. Need detailed cost breakdown and investigation into the auto-suspend configuration.', 'High', 'Open', '2026-04-16 16:00:00', 'Billing'),
('Mei Taniguchi', 'Wayfinder Labs', 'Need geospatial function improvements for routing', 'The H3 and geography functions work well for basic routing but we need support for more complex geospatial operations like isochrone generation and multi-stop route optimization directly in SQL. Currently we have to extract data and process externally which adds 30 minutes to our pipeline.', 'Medium', 'Open', '2026-04-15 10:30:00', 'Feature Request'),
('Sophie Landry', 'Ironclad Financial', 'RBAC policy audit requirement for SOX compliance', 'We need to generate comprehensive audit reports showing all role grants, privilege assignments, and data access patterns for our SOX compliance review. The current ACCOUNT_USAGE views do not provide the complete historical lineage we need. Can you provide guidance on building a complete audit trail for all RBAC changes?', 'High', 'In Progress', '2026-04-14 13:00:00', 'Security'),
('Connor Doyle', 'Evergreen Health', 'HIPAA compliance documentation for Cortex functions', 'Our legal team requires formal documentation confirming that Cortex AI functions process PHI in compliance with HIPAA technical safeguards. Specifically we need confirmation about data encryption in transit and at rest during AI processing, audit logging of all PHI access, and BAA coverage for AI function usage.', 'Critical', 'Open', '2026-04-13 09:00:00', 'Compliance'),
('Jin Zhao', 'BrightCart Retail', 'Auto-scaling issues during flash sales', 'During our recent 48-hour flash sale event, the recommendation engine backed by Snowflake experienced severe degradation. Query latency went from 200ms to 15 seconds when concurrent user count exceeded 50K. The multi-cluster warehouse did not scale fast enough. We need guaranteed sub-second response times for up to 200K concurrent users.', 'Critical', 'In Progress', '2026-04-12 22:00:00', 'Performance'),
('Diego Fuentes', 'Nexus Digital', 'CI/CD integration for Cortex Code workflow', 'We want to integrate Cortex Code suggestions directly into our GitHub Actions CI/CD pipeline for automated code review on pull requests. Is there an API or webhook we can use to trigger Cortex Code analysis programmatically? We process approximately 200 PRs per day across 50 repositories.', 'Medium', 'Open', '2026-04-11 15:30:00', 'Integration');
3C: AI Adoption Metrics — 20 records
INSERT INTO AGENTIC_ENTERPRISE_DEMO.APP_DATA.AI_ADOPTION_METRICS
(ENTERPRISE_NAME, INDUSTRY, AI_USE_CASE, ADOPTION_STAGE, EFFICIENCY_GAIN_PCT, COST_REDUCTION_PCT, EMPLOYEES_IMPACTED, REPORT_DATE)
VALUES
('Meridian Services', 'BPO', 'Customer Service Automation', 'Production', 42.5, 28.3, 5200, '2026-04-01'),
('Meridian Services', 'BPO', 'Document Processing', 'Production', 55.0, 35.0, 3100, '2026-04-01'),
('Pinnacle Devices', 'Consumer Electronics', 'Product Feedback Analysis', 'Production', 68.0, 22.5, 850, '2026-04-01'),
('Pinnacle Devices', 'Consumer Electronics', 'Supply Chain Optimization', 'Pilot', 30.0, 18.0, 400, '2026-04-01'),
('Wayfinder Labs', 'Navigation Technology', 'Route Optimization AI', 'Production', 35.0, 20.0, 600, '2026-04-01'),
('Wayfinder Labs', 'Navigation Technology', 'Natural Language Search', 'Pilot', 25.0, 12.0, 200, '2026-04-01'),
('Titan Equipment Co', 'Equipment Rental', 'Fleet Maintenance Prediction', 'Production', 28.0, 32.0, 4500, '2026-04-01'),
('Titan Equipment Co', 'Equipment Rental', 'Demand Forecasting', 'Production', 40.0, 25.0, 2000, '2026-04-01'),
('NovaSilicon', 'Semiconductor', 'R&D Code Generation', 'Production', 60.0, 15.0, 350, '2026-04-01'),
('NovaSilicon', 'Semiconductor', 'Quality Inspection AI', 'Pilot', 45.0, 30.0, 500, '2026-04-01'),
('Ironclad Financial', 'Financial Services', 'Fraud Detection', 'Production', 72.0, 40.0, 3000, '2026-04-01'),
('Ironclad Financial', 'Financial Services', 'Regulatory Compliance', 'Pilot', 38.0, 22.0, 1200, '2026-04-01'),
('Evergreen Health', 'Healthcare', 'Clinical Note Summarization', 'Production', 50.0, 18.0, 8000, '2026-04-01'),
('Evergreen Health', 'Healthcare', 'Patient Triage Automation', 'Pilot', 35.0, 25.0, 2500, '2026-04-01'),
('BrightCart Retail', 'Retail', 'Personalized Recommendations', 'Production', 48.0, 20.0, 1500, '2026-04-01'),
('BrightCart Retail', 'Retail', 'Inventory Optimization', 'Production', 55.0, 35.0, 900, '2026-04-01'),
('Nexus Digital', 'Technology', 'Code Review Automation', 'Production', 65.0, 28.0, 2000, '2026-04-01'),
('Nexus Digital', 'Technology', 'Incident Response', 'Pilot', 40.0, 20.0, 800, '2026-04-01'),
('Solaris Power', 'Energy', 'Predictive Maintenance', 'Production', 52.0, 38.0, 6000, '2026-04-01'),
('Solaris Power', 'Energy', 'Grid Load Forecasting', 'Pilot', 30.0, 15.0, 1000, '2026-04-01');
3D: Workforce Impact — 20 records, Q1 + Q2 2026
INSERT INTO AGENTIC_ENTERPRISE_DEMO.APP_DATA.WORKFORCE_IMPACT
(DEPARTMENT, ROLE_CATEGORY, AI_TOOL_USED, HOURS_SAVED_WEEKLY, PRODUCTIVITY_INCREASE_PCT, TASKS_AUTOMATED, QUARTER, YEAR)
VALUES
('Engineering', 'Software Developer', 'Cortex Code', 12.5, 45.0, 8, 'Q1', 2026),
('Engineering', 'Data Engineer', 'Cortex Code', 10.0, 38.0, 6, 'Q1', 2026),
('Engineering', 'DevOps Engineer', 'Cortex Code', 8.0, 30.0, 5, 'Q1', 2026),
('Data Science', 'ML Engineer', 'Snowflake Intelligence', 15.0, 52.0, 10, 'Q1', 2026),
('Data Science', 'Data Analyst', 'Snowflake Intelligence', 18.0, 60.0, 12, 'Q1', 2026),
('Data Science', 'BI Developer', 'Snowflake Intelligence', 14.0, 48.0, 9, 'Q1', 2026),
('Customer Service', 'Support Agent', 'Snowflake Intelligence', 20.0, 40.0, 15, 'Q1', 2026),
('Customer Service', 'Team Lead', 'Snowflake Intelligence', 10.0, 35.0, 7, 'Q1', 2026),
('Marketing', 'Content Creator', 'Cortex AI Functions', 16.0, 55.0, 11, 'Q1', 2026),
('Marketing', 'Campaign Manager', 'Cortex AI Functions', 12.0, 42.0, 8, 'Q1', 2026),
('Finance', 'Financial Analyst', 'Snowflake Intelligence', 14.0, 45.0, 9, 'Q1', 2026),
('Finance', 'Risk Analyst', 'Snowflake Intelligence', 11.0, 38.0, 7, 'Q1', 2026),
('HR', 'Talent Acquisition', 'Cortex AI Functions', 9.0, 32.0, 6, 'Q1', 2026),
('HR', 'People Analytics', 'Snowflake Intelligence', 13.0, 50.0, 8, 'Q1', 2026),
('Operations', 'Supply Chain Mgr', 'Snowflake Intelligence', 17.0, 48.0, 10, 'Q1', 2026),
('Operations', 'Fleet Manager', 'Cortex AI Functions', 15.0, 42.0, 9, 'Q1', 2026),
('Engineering', 'Software Developer', 'Cortex Code', 14.0, 50.0, 10, 'Q2', 2026),
('Data Science', 'Data Analyst', 'Snowflake Intelligence', 20.0, 65.0, 14, 'Q2', 2026),
('Customer Service', 'Support Agent', 'Snowflake Intelligence', 22.0, 45.0, 17, 'Q2', 2026),
('Marketing', 'Content Creator', 'Cortex AI Functions', 18.0, 58.0, 13, 'Q2', 2026);
3E: Enterprise News — 8 articles in English, French, German
INSERT INTO AGENTIC_ENTERPRISE_DEMO.APP_DATA.ENTERPRISE_NEWS
(TITLE, CONTENT, SOURCE, PUBLISHED_DATE, LANGUAGE)
VALUES
('Snowflake Expands Intelligence and Cortex Code for the Agentic Enterprise', 'Snowflake today announced significant updates across Snowflake Intelligence and Cortex Code, advancing its vision to become the control plane for the agentic enterprise. As AI systems evolve from answering questions to taking action, these enhancements enable organizations to connect even more data sources, enterprise systems, and AI models with their trusted Snowflake data. Snowflake Intelligence now serves as a personal work agent for business users that adapts over time by learning individual preferences and workflows. Cortex Code is expanding as a builder layer for enterprise AI bringing governed data-native development across the enterprise data ecosystem.', 'Snowflake Newsroom', '2026-04-21', 'en'),
('Meridian Services Transforms Customer Operations with Snowflake AI', 'Meridian Services, a leading business process outsourcing company, has deployed Snowflake Intelligence across its customer service operations. The company reports a 40% reduction in average handle time and significant improvements in first-call resolution rates. By leveraging Cortex AI functions for real-time sentiment analysis and automated response suggestions, Meridian Services has empowered its agents to deliver more personalized customer experiences.', 'TechCrunch', '2026-04-20', 'en'),
('Les entreprises adoptent IA agentique pour transformer les operations', 'Les grandes entreprises europeennes accelerent leur adoption de intelligence artificielle agentique. Cette nouvelle generation de systemes AI ne se contente plus de repondre aux questions mais prend des actions concretes dans les flux de travail quotidiens. Les plateformes comme Snowflake Intelligence permettent aux utilisateurs metier de automatiser des taches repetitives tout en gardant le controle total sur la gouvernance des donnees.', 'Le Monde Tech', '2026-04-19', 'fr'),
('Pinnacle Devices Uses AI-Powered Analytics to Drive Product Innovation', 'Pinnacle Devices has integrated Snowflake Cortex functions into its product feedback pipeline, enabling real-time classification and entity extraction from millions of customer reviews across 30 markets. The system automatically categorizes feedback by product line, identifies emerging feature requests, and routes critical issues to engineering teams. This has reduced the time from customer insight to product action from weeks to hours.', 'Wired', '2026-04-18', 'en'),
('Die Zukunft der Unternehmensintelligenz liegt in agentischen KI-Systemen', 'Deutsche Unternehmen setzen zunehmend auf agentische KI-Systeme, die uber einfache Chatbots hinausgehen. Diese neuen Systeme konnen eigenstandig Aufgaben ausfuhren, Datenquellen verbinden und Entscheidungen vorbereiten. Snowflake Intelligence bietet eine einheitliche Plattform, auf der Geschaftsanwender mit ihren Daten interagieren und Aktionen uber Unternehmenssysteme hinweg auslosen konnen.', 'Handelsblatt', '2026-04-17', 'de'),
('Titan Equipment Co Achieves Operational Excellence with Agentic AI', 'Titan Equipment Co, the worlds largest equipment rental company, has deployed Snowflake Intelligence to optimize its fleet management and maintenance scheduling. The AI agent analyzes equipment sensor data, maintenance histories, and demand forecasts to proactively schedule maintenance, reducing equipment downtime by 28% and improving fleet utilization by 15%. The company credits the unified data platform for enabling seamless integration across its operational systems.', 'Bloomberg Technology', '2026-04-16', 'en'),
('NovaSilicon Accelerates Semiconductor R&D with Cortex Code', 'NovaSilicon, a leader in silicon carbide technology, has adopted Cortex Code to accelerate its semiconductor research and development workflows. Engineers now use AI-powered code generation to write and test simulation scripts 3x faster, while maintaining strict governance over proprietary process parameters. The tool integrates directly with their existing development environments, enabling a seamless transition from experimentation to production.', 'IEEE Spectrum', '2026-04-15', 'en'),
('Enterprise AI Market to Reach $500B by 2028 as Agentic Systems Scale', 'The global enterprise AI market is projected to reach $500 billion by 2028, driven primarily by the adoption of agentic AI systems that can autonomously execute complex business processes. Key growth areas include automated data analysis, intelligent workflow orchestration, and AI-powered customer engagement. Companies like Snowflake, Microsoft, and Google are competing to provide the foundational platforms for this transformation, with data governance and security emerging as critical differentiators.', 'Gartner Research', '2026-04-14', 'en');
Part 4: Cortex AI in Action
Before building the Streamlit app, run these standalone queries to see each function in action. They are the exact same queries the dashboard executes.
4A: CORTEX.SENTIMENT — Emotional tone at scale
Scores range from -1 (strongly negative) to +1 (strongly positive). Words like "transformed" and "game-changers" push scores high; "disappointed" and "latency issues" pull them down.
SELECT CUSTOMER_NAME, COMPANY,
SNOWFLAKE.CORTEX.SENTIMENT(FEEDBACK_TEXT) AS SENTIMENT_SCORE,
CASE WHEN SNOWFLAKE.CORTEX.SENTIMENT(FEEDBACK_TEXT) > 0.1 THEN 'Positive'
WHEN SNOWFLAKE.CORTEX.SENTIMENT(FEEDBACK_TEXT) < -0.1 THEN 'Negative'
ELSE 'Neutral' END AS SENTIMENT_LABEL
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.CUSTOMER_FEEDBACK ORDER BY SENTIMENT_SCORE DESC;
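The same ±0.1 banding is easy to mirror client-side when post-processing query results — a minimal Python sketch whose thresholds match the CASE expression above (the row data here is illustrative, not real CORTEX.SENTIMENT output):

```python
def sentiment_label(score: float) -> str:
    """Mirror of the SQL CASE: > 0.1 Positive, < -0.1 Negative, else Neutral."""
    if score > 0.1:
        return 'Positive'
    if score < -0.1:
        return 'Negative'
    return 'Neutral'

# Illustrative (customer, score) pairs, not actual model output
rows = [('Lena Hargrove', 0.87), ('Jin Zhao', -0.30), ('Anika Sharma', 0.05)]
labeled = {name: sentiment_label(score) for name, score in rows}
print(labeled)  # {'Lena Hargrove': 'Positive', 'Jin Zhao': 'Negative', 'Anika Sharma': 'Neutral'}
```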
4B: CORTEX.SUMMARIZE — Support ticket compression
A 300-word ticket description collapses to 2 sentences instantly. The highest immediate ROI of any Cortex function for support operations.
SELECT TICKET_ID, SUBJECT, PRIORITY,
SNOWFLAKE.CORTEX.SUMMARIZE(DESCRIPTION) AS AI_SUMMARY
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.SUPPORT_TICKETS
WHERE PRIORITY IN ('Critical', 'High') ORDER BY CREATED_AT DESC;
4C: CORTEX.CLASSIFY_TEXT — Intelligent ticket routing
No training required. “Latency during peak hours” resolves to Performance; "billing discrepancy" resolves to Billing. The side-by-side with MANUAL_CATEGORY lets you measure routing agreement in SQL.
SELECT TICKET_ID, SUBJECT, CATEGORY AS MANUAL_CATEGORY,
SNOWFLAKE.CORTEX.CLASSIFY_TEXT(DESCRIPTION,
ARRAY_CONSTRUCT('Performance', 'Feature Request', 'Billing', 'Security', 'Compliance', 'Data Pipeline', 'Integration')
) AS AI_CLASSIFICATION
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.SUPPORT_TICKETS ORDER BY TICKET_ID;
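If you fetch those rows into Python, routing agreement reduces to a one-liner. A sketch, assuming the result set comes back as (manual, ai) tuples and that CLASSIFY_TEXT's JSON result carries the predicted class under a "label" key (the ticket data below is made up for illustration):

```python
import json

# (manual_category, ai_classification_json) pairs -- illustrative, not real output
rows = [
    ('Performance', '{"label": "Performance"}'),
    ('Feature Request', '{"label": "Feature Request"}'),
    ('Billing', '{"label": "Billing"}'),
    ('Security', '{"label": "Compliance"}'),  # one disagreement
]

matches = sum(1 for manual, ai in rows if json.loads(ai)['label'] == manual)
agreement = matches / len(rows)
print(f'Routing agreement: {agreement:.0%}')  # Routing agreement: 75%
```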
4D: CORTEX.EXTRACT_ANSWER — KPI extraction from documents
Natural language questions answered from freeform text, in SQL. Enterprise use case: pulling metrics from analyst reports and internal documentation at scale.
SELECT TITLE,
SNOWFLAKE.CORTEX.EXTRACT_ANSWER(CONTENT, 'What measurable business outcomes were achieved?') AS EXTRACTED_OUTCOMES
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.ENTERPRISE_NEWS WHERE LANGUAGE = 'en' LIMIT 4;
4E: CORTEX.TRANSLATE — Multi-language analysis inside the governance boundary
No external API call. No data leaving Snowflake. The feature that matters most for compliance-sensitive industries where data residency is non-negotiable.
SELECT TITLE, LANGUAGE AS ORIGINAL_LANG,
SNOWFLAKE.CORTEX.TRANSLATE(CONTENT, LANGUAGE, 'en') AS ENGLISH_TRANSLATION
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.ENTERPRISE_NEWS WHERE LANGUAGE != 'en';
4F: CORTEX.COMPLETE — LLM inference directly in SQL
Foundation model inference without leaving the warehouse. Parameterize the prompt with a CTE for a fully dynamic executive briefing that refreshes with your data.
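In application code, the same briefing becomes fully dynamic by formatting query rows into the prompt string. A minimal Python sketch — the tuples below mirror two of the hardcoded bullets in the query that follows, but in practice they would come from AI_ADOPTION_METRICS:

```python
# Illustrative rows: (enterprise, efficiency_gain_pct, use_case, stage)
rows = [
    ('Meridian Services', 42.5, 'Customer Service', 'Production'),
    ('Ironclad Financial', 72.0, 'Fraud Detection', 'Production'),
]

bullets = '\n'.join(
    f'- {name}: {gain}% efficiency gain in {use_case} ({stage})'
    for name, gain, use_case, stage in rows
)
prompt = (
    'Based on this enterprise AI adoption data, write a 3-sentence executive summary:\n'
    + bullets
    + '\nFocus on the business impact and readiness for production.'
)
print(prompt)
```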
SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large2',
'Based on this enterprise AI adoption data, write a 3-sentence executive summary:
- Meridian Services: 42.5% efficiency gain in Customer Service (Production)
- Pinnacle Devices: 68% efficiency gain in Product Feedback Analysis (Production)
- Titan Equipment Co: 28% efficiency gain in Fleet Maintenance (Production)
- NovaSilicon: 60% efficiency gain in R&D Code Generation (Production)
- Ironclad Financial: 72% efficiency gain in Fraud Detection (Production)
Focus on the business impact and readiness for production.') AS EXECUTIVE_BRIEFING;
Part 5: Cross-Table Analytics
Single-table AI queries are useful. Cross-table patterns are where enterprise dashboards earn their keep.
5A: Sentiment vs Business Outcomes
SELECT f.COMPANY, f.INDUSTRY,
ROUND(AVG(SNOWFLAKE.CORTEX.SENTIMENT(f.FEEDBACK_TEXT)), 3) AS AVG_SENTIMENT,
ROUND(AVG(a.EFFICIENCY_GAIN_PCT), 1) AS AVG_EFFICIENCY_GAIN,
ROUND(AVG(a.COST_REDUCTION_PCT), 1) AS AVG_COST_REDUCTION
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.CUSTOMER_FEEDBACK f
JOIN AGENTIC_ENTERPRISE_DEMO.APP_DATA.AI_ADOPTION_METRICS a ON f.COMPANY = a.ENTERPRISE_NAME
GROUP BY f.COMPANY, f.INDUSTRY ORDER BY AVG_SENTIMENT DESC;
5B: Workforce Productivity by AI Tool
SELECT DEPARTMENT, AI_TOOL_USED,
ROUND(AVG(HOURS_SAVED_WEEKLY), 1) AS AVG_HOURS_SAVED,
ROUND(AVG(PRODUCTIVITY_INCREASE_PCT), 1) AS AVG_PRODUCTIVITY_BOOST,
SUM(TASKS_AUTOMATED) AS TOTAL_TASKS
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.WORKFORCE_IMPACT
WHERE QUARTER = 'Q1' AND YEAR = 2026
GROUP BY DEPARTMENT, AI_TOOL_USED ORDER BY AVG_HOURS_SAVED DESC;
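The same rollup can be reproduced on the client with pandas once the raw table is fetched — a sketch using a small illustrative subset standing in for the Q1 2026 rows:

```python
import pandas as pd

# Illustrative subset of WORKFORCE_IMPACT (Q1 2026 rows)
df = pd.DataFrame({
    'DEPARTMENT': ['Engineering', 'Engineering', 'Data Science'],
    'AI_TOOL_USED': ['Cortex Code', 'Cortex Code', 'Snowflake Intelligence'],
    'HOURS_SAVED_WEEKLY': [12.5, 10.0, 18.0],
    'PRODUCTIVITY_INCREASE_PCT': [45.0, 38.0, 60.0],
    'TASKS_AUTOMATED': [8, 6, 12],
})

# groupby + named aggregations mirror the SQL AVG/SUM columns
rollup = (
    df.groupby(['DEPARTMENT', 'AI_TOOL_USED'], as_index=False)
      .agg(AVG_HOURS_SAVED=('HOURS_SAVED_WEEKLY', 'mean'),
           AVG_PRODUCTIVITY_BOOST=('PRODUCTIVITY_INCREASE_PCT', 'mean'),
           TOTAL_TASKS=('TASKS_AUTOMATED', 'sum'))
      .sort_values('AVG_HOURS_SAVED', ascending=False)
)
print(rollup)
```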
5C: Support Ticket Priority Heat Map
AI urgency scoring catches tickets marked Medium that read like Critical. The most negative scores are your highest-risk open items.
SELECT COMPANY, CATEGORY, PRIORITY, COUNT(*) AS TICKET_COUNT,
ROUND(AVG(SNOWFLAKE.CORTEX.SENTIMENT(DESCRIPTION)), 3) AS AVG_URGENCY_SCORE
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.SUPPORT_TICKETS
GROUP BY COMPANY, CATEGORY, PRIORITY ORDER BY AVG_URGENCY_SCORE ASC;
5D: Multi-Language News Sentiment Tracker
SELECT TITLE, SOURCE, LANGUAGE, PUBLISHED_DATE,
ROUND(SNOWFLAKE.CORTEX.SENTIMENT(CONTENT), 3) AS SENTIMENT,
CASE WHEN SNOWFLAKE.CORTEX.SENTIMENT(CONTENT) > 0.5 THEN 'Very Positive'
WHEN SNOWFLAKE.CORTEX.SENTIMENT(CONTENT) > 0.1 THEN 'Positive'
WHEN SNOWFLAKE.CORTEX.SENTIMENT(CONTENT) > -0.1 THEN 'Neutral'
ELSE 'Negative' END AS SENTIMENT_BAND
FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.ENTERPRISE_NEWS ORDER BY PUBLISHED_DATE DESC;
Part 6: Streamlit Deployment — Tested & Verified
Most Streamlit-on-Snowflake setups rely on manual steps — uploading files, managing environments, and repeating everything after every change. This approach removes all of that.
Instead, two Python stored procedures generate and write the entire app directly inside Snowflake at runtime. The Streamlit code is built line-by-line using safe character handling (chr(36)*2, chr(39), chr(34)*3) to avoid SQL parser issues, then stored in an APP_SOURCE table for built-in version control.
No external files. No renaming. No rework. Just one worksheet to deploy everything.
Step 1: Create the Stage
CREATE OR REPLACE STAGE AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE
COMMENT = 'Stage for Agentic Enterprise Streamlit app files';
Step 2: Create the APP_SOURCE Helper Table
This table persists the generated app source inside Snowflake — giving you version history and the ability to inspect the generated code without going to the stage.
CREATE OR REPLACE TABLE AGENTIC_ENTERPRISE_DEMO.APP_DATA.APP_SOURCE (
FILE_NAME VARCHAR(200),
CONTENT VARCHAR(16777216)
);
Step 3: Write environment.yml to Stage
CREATE OR REPLACE PROCEDURE AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_ENV_FILE()
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION = '3.11'
PACKAGES = ('snowflake-snowpark-python')
HANDLER = 'run'
EXECUTE AS CALLER
AS
$$
import tempfile, os, shutil
def run(session):
    content = """name: sf_env
channels:
- snowflake
dependencies:
- altair
- streamlit
- snowflake-snowpark-python
"""
    tmpdir = tempfile.mkdtemp()
    fpath = os.path.join(tmpdir, 'environment.yml')
    with open(fpath, 'w') as f:
        f.write(content)
    session.file.put(fpath, '@AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE', auto_compress=False, overwrite=True)
    shutil.rmtree(tmpdir)
    return 'environment.yml uploaded to stage'
$$;
CALL AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_ENV_FILE();
The procedure uses shutil.rmtree(tmpdir) for clean temp-directory handling and writes environment.yml to stage under its final name, eliminating the rename step entirely.
Step 4: Build and Upload the Complete Streamlit App
This is the core technique.
The entire app is constructed line-by-line inside a Python list L. To avoid conflicts with Snowflake’s $$ SQL delimiter, special characters are generated at runtime using chr():
- DQ = chr(36)*2 → handles $$ (Snowflake dollar-quoting)
- NL = chr(10) → joins lines with newlines
- Q3 = chr(34)*3 → supports Python triple-quoted strings
- SQ = chr(39) → safe single quotes for SQL literals
Once assembled, the code is written to a temp file (agentic_enterprise_app.py), PUT directly to stage, and also inserted into APP_SOURCE—giving you built-in source persistence inside Snowflake.
No file juggling. No external dependency. Just clean, controlled deployment.
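Stripped of the Snowflake context, the substitution itself is plain Python. This standalone sketch shows that the four chr() expressions yield exactly the characters the procedure can't spell out literally, and how the line list becomes a source file:

```python
# Each character is built at runtime, so the procedure source never contains
# a literal $$ (which would terminate Snowflake's dollar-quoted body early).
DQ = chr(36) * 2   # dollar-quote delimiter ($$)
NL = chr(10)       # newline, used to join the generated lines
Q3 = chr(34) * 3   # Python triple-quote (""")
SQ = chr(39)       # single quote for SQL literals

# Toy version of the line-by-line build: a triple-quoted CSS constant
L = []
L.append('CSS = ' + Q3)
L.append('<style>body { margin: 0; }</style>')
L.append(Q3)
source = NL.join(L)
print(source)
```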
CREATE OR REPLACE PROCEDURE AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_APP_FILE()
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION = '3.11'
PACKAGES = ('snowflake-snowpark-python')
HANDLER = 'run'
EXECUTE AS CALLER
AS
$$
import tempfile, os, shutil
def run(session):
    DQ = chr(36)*2
    NL = chr(10)
    Q3 = chr(34)*3
    SQ = chr(39)
    L = []
    L.append('import streamlit as st')
    L.append('from snowflake.snowpark.context import get_active_session')
    L.append('from snowflake.snowpark import functions as F')
    L.append('import pandas as pd')
    L.append('import altair as alt')
    L.append('import json')
    L.append('')
    L.append('st.set_page_config(page_title="Agentic Enterprise Dashboard", page_icon="\\U0001f3e2", layout="wide")')
    L.append('')
    L.append('session = get_active_session()')
    L.append('')
    L.append('DB = "AGENTIC_ENTERPRISE_DEMO"')
    L.append('SCHEMA = "APP_DATA"')
    L.append('')
    L.append('CUSTOM_CSS = ' + Q3)
    L.append('<style>')
    L.append('  .main-header { font-size: 2.2rem; font-weight: 700; color: #1E3A5F; margin-bottom: 0.2rem; }')
    L.append('  .sub-header { font-size: 1.0rem; color: #6B7280; margin-bottom: 1.5rem; }')
    L.append('  .metric-card {')
    L.append('    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);')
    L.append('    padding: 1.2rem; border-radius: 12px; color: white; text-align: center;')
    L.append('  }')
    L.append('  .metric-value { font-size: 2rem; font-weight: 700; }')
    L.append('  .metric-label { font-size: 0.85rem; opacity: 0.85; }')
    L.append('  .section-divider { border-top: 2px solid #E5E7EB; margin: 2rem 0; }')
    L.append('</style>')
    L.append(Q3)
    L.append('st.markdown(CUSTOM_CSS, unsafe_allow_html=True)')
    L.append('')
    L.append('st.markdown(' + SQ + '<div class="main-header">Snowflake Agentic Enterprise Control Plane</div>' + SQ + ', unsafe_allow_html=True)')
    L.append('st.markdown(' + SQ + '<div class="sub-header">Powered by Snowflake Intelligence & Cortex Code \\u2014 Real-time AI insights across your enterprise</div>' + SQ + ', unsafe_allow_html=True)')
    L.append('')
    L.append('tabs = st.tabs([')
    L.append('    "\\U0001f4ca Executive Overview",')
    L.append('    "\\U0001f9e0 Cortex AI Lab",')
    L.append('    "\\U0001f4ac Voice of Customer",')
    L.append('    "\\U0001f3ab Support Intelligence",')
    L.append('    "\\U0001f465 Workforce Impact",')
    L.append('    "\\U0001f4f0 News Intelligence"')
    L.append('])')
    L.append('')
    L.append('DQ = chr(36)*2')
    L.append('SQ = chr(39)')
    L.append('')
    # TAB 1: Executive Overview
    L.append('with tabs[0]:')
    L.append('    adoption_df = session.sql(f"SELECT * FROM {DB}.{SCHEMA}.AI_ADOPTION_METRICS ORDER BY ENTERPRISE_NAME").to_pandas()')
    L.append('    c1, c2, c3, c4 = st.columns(4)')
    L.append('    with c1:')
    L.append('        st.metric("Enterprises", adoption_df["ENTERPRISE_NAME"].nunique())')
    L.append('    with c2:')
    L.append("        st.metric(\"Avg Efficiency Gain\", f\"{adoption_df['EFFICIENCY_GAIN_PCT'].mean():.1f}%\")")
    L.append('    with c3:')
    L.append("        st.metric(\"Total Employees Impacted\", f\"{adoption_df['EMPLOYEES_IMPACTED'].sum():,}\")")
    L.append('    with c4:')
    L.append('        st.metric("Production Use Cases", len(adoption_df[adoption_df["ADOPTION_STAGE"] == "Production"]))')
L.append(' st.markdown("---")')
L.append(' col_left, col_right = st.columns(2)')
L.append(' with col_left:')
L.append(' st.subheader("AI Adoption by Industry")')
L.append(' industry_chart = alt.Chart(adoption_df).mark_bar(cornerRadiusTopLeft=6, cornerRadiusTopRight=6).encode(')
L.append(' x=alt.X("INDUSTRY:N", sort="-y", title="Industry"),')
L.append(' y=alt.Y("mean(EFFICIENCY_GAIN_PCT):Q", title="Avg Efficiency Gain (%)"),')
L.append(' color=alt.Color("ADOPTION_STAGE:N", scale=alt.Scale(scheme="tableau10")),')
L.append(' tooltip=["INDUSTRY", "ADOPTION_STAGE", "mean(EFFICIENCY_GAIN_PCT)"]')
L.append(' ).properties(height=350)')
L.append(' st.altair_chart(industry_chart, use_container_width=True)')
L.append(' with col_right:')
L.append(' st.subheader("Cost Reduction vs Efficiency Gain")')
L.append(' scatter = alt.Chart(adoption_df).mark_circle(size=100, opacity=0.7).encode(')
L.append(' x=alt.X("EFFICIENCY_GAIN_PCT:Q", title="Efficiency Gain (%)"),')
L.append(' y=alt.Y("COST_REDUCTION_PCT:Q", title="Cost Reduction (%)"),')
L.append(' color=alt.Color("INDUSTRY:N", scale=alt.Scale(scheme="tableau10")),')
L.append(' size=alt.Size("EMPLOYEES_IMPACTED:Q", legend=None),')
L.append(' tooltip=["ENTERPRISE_NAME", "AI_USE_CASE", "EFFICIENCY_GAIN_PCT", "COST_REDUCTION_PCT", "EMPLOYEES_IMPACTED"]')
L.append(' ).properties(height=350)')
L.append(' st.altair_chart(scatter, use_container_width=True)')
L.append(' st.subheader("Enterprise AI Adoption Details")')
L.append(' st.dataframe(adoption_df[["ENTERPRISE_NAME", "INDUSTRY", "AI_USE_CASE", "ADOPTION_STAGE",')
L.append(' "EFFICIENCY_GAIN_PCT", "COST_REDUCTION_PCT", "EMPLOYEES_IMPACTED"]],')
L.append(' use_container_width=True, hide_index=True)')
L.append('')
# TAB 2: Cortex AI Lab
L.append('with tabs[1]:')
L.append(' st.subheader("Interactive Cortex AI Function Playground")')
L.append(' st.caption("Test Snowflake Cortex AI functions on live data or custom text")')
L.append(' ai_func = st.selectbox("Select Cortex Function", [')
L.append(' "SNOWFLAKE.CORTEX.SENTIMENT", "SNOWFLAKE.CORTEX.SUMMARIZE",')
L.append(' "SNOWFLAKE.CORTEX.CLASSIFY_TEXT", "SNOWFLAKE.CORTEX.EXTRACT_ANSWER",')
L.append(' "SNOWFLAKE.CORTEX.TRANSLATE", "SNOWFLAKE.CORTEX.COMPLETE"')
L.append(' ])')
L.append(' if ai_func == "SNOWFLAKE.CORTEX.SENTIMENT":')
L.append(' user_text = st.text_area("Enter text for sentiment analysis:",')
L.append(' value="Snowflake Intelligence has completely transformed how our team handles customer queries. The results are phenomenal.", height=100)')
L.append(' if st.button("Analyze Sentiment", key="sentiment_btn"):')
L.append(' with st.spinner("Running Cortex Sentiment..."):')
L.append(f' result = session.sql(f"SELECT SNOWFLAKE.CORTEX.SENTIMENT(' + '{DQ}{user_text}{DQ}' + ') AS SCORE").to_pandas()')
L.append(' score = float(result["SCORE"].iloc[0])')
L.append(' st.metric("Sentiment Score", f"{score:.4f}", delta="Positive" if score > 0.1 else ("Negative" if score < -0.1 else "Neutral"))')
L.append(' st.progress(min(max((score + 1) / 2, 0.0), 1.0))')
L.append(' elif ai_func == "SNOWFLAKE.CORTEX.SUMMARIZE":')
L.append(' user_text = st.text_area("Enter text to summarize:",')
L.append(' value="Snowflake today announced significant updates across Snowflake Intelligence and Cortex Code, advancing its vision to become the control plane for the agentic enterprise.", height=150)')
L.append(' if st.button("Summarize", key="summarize_btn"):')
L.append(' with st.spinner("Running Cortex Summarize..."):')
L.append(f' result = session.sql(f"SELECT SNOWFLAKE.CORTEX.SUMMARIZE(' + '{DQ}{user_text}{DQ}' + ') AS SUMMARY").to_pandas()')
L.append(' st.info(result["SUMMARY"].iloc[0])')
L.append(' elif ai_func == "SNOWFLAKE.CORTEX.CLASSIFY_TEXT":')
L.append(' user_text = st.text_area("Enter text to classify:",')
L.append(' value="Our recommendation engine experienced severe latency issues during the flash sale event.", height=100)')
L.append(' categories = st.text_input("Categories (comma-separated):", value="Performance Issue, Feature Request, Billing, Security, Bug Report")')
L.append(' if st.button("Classify", key="classify_btn"):')
L.append(" cat_list = [c.strip() for c in categories.split(',')]")
L.append(""" cat_array = ", ".join([f"'{c}'" for c in cat_list])""")
L.append(' with st.spinner("Running Cortex Classify..."):')
L.append(f' result = session.sql(f"SELECT SNOWFLAKE.CORTEX.CLASSIFY_TEXT(' + '{DQ}{user_text}{DQ}' + ', ARRAY_CONSTRUCT({cat_array})) AS RESULT").to_pandas()')
L.append(' parsed = json.loads(result["RESULT"].iloc[0])')
L.append(""" st.success(f"**Classification:** {parsed.get('label', 'N/A')}")""")
L.append(' st.json(parsed)')
L.append(' elif ai_func == "SNOWFLAKE.CORTEX.EXTRACT_ANSWER":')
L.append(' source_text = st.text_area("Source text:",')
L.append(' value="Titan Equipment Co deployed Snowflake Intelligence to optimize fleet management. The AI agent analyzes equipment sensor data and maintenance histories. This reduced equipment downtime by 28% and improved fleet utilization by 15%.", height=120)')
L.append(' question = st.text_input("Question:", value="What was the reduction in equipment downtime?")')
L.append(' if st.button("Extract Answer", key="extract_btn"):')
L.append(' with st.spinner("Running Cortex Extract Answer..."):')
L.append(f' result = session.sql(f"SELECT SNOWFLAKE.CORTEX.EXTRACT_ANSWER(' + '{DQ}{source_text}{DQ}' + ', ' + '{DQ}{question}{DQ}' + ') AS ANSWER").to_pandas()')
L.append(""" st.success(f"**Answer:** {result['ANSWER'].iloc[0]}")""")
L.append(' elif ai_func == "SNOWFLAKE.CORTEX.TRANSLATE":')
L.append(' user_text = st.text_area("Enter text to translate:",')
L.append(' value="Snowflake delivers agentic AI for both business users and builders on a single platform.", height=100)')
L.append(' target_lang = st.selectbox("Target language:", ["fr", "de", "es", "ja", "ko", "pt", "it", "zh"])')
L.append(' if st.button("Translate", key="translate_btn"):')
L.append(' with st.spinner("Running Cortex Translate..."):')
    L.append("                safe_text = user_text  # dollar-quoting below makes quote escaping unnecessary")
L.append(' result = session.sql(f"SELECT SNOWFLAKE.CORTEX.TRANSLATE({DQ}{safe_text}{DQ}, ' + "'" + 'en' + "'" + ', ' + "'" + '{target_lang}' + "'" + ') AS TRANSLATED").to_pandas()')
L.append(""" st.info(result["TRANSLATED"].iloc[0])""")
L.append(' elif ai_func == "SNOWFLAKE.CORTEX.COMPLETE":')
L.append(' model = st.selectbox("Model:", ["snowflake-arctic", "mistral-large2", "llama3.1-70b", "llama3.1-8b"])')
L.append(' prompt = st.text_area("Prompt:", value="Explain how agentic AI differs from traditional AI assistants in 3 concise bullet points.", height=100)')
L.append(' if st.button("Generate", key="complete_btn"):')
L.append(' with st.spinner(f"Running Cortex Complete ({model})..."):')
    L.append("                safe_prompt = prompt  # dollar-quoting below makes quote escaping unnecessary")
L.append(' result = session.sql(f"SELECT SNOWFLAKE.CORTEX.COMPLETE(' + "'" + '{model}' + "'" + ', {DQ}{safe_prompt}{DQ}) AS RESPONSE").to_pandas()')
L.append(' st.markdown(result["RESPONSE"].iloc[0])')
L.append('')
# TAB 3: Voice of Customer
L.append('with tabs[2]:')
L.append(' st.subheader("AI-Powered Customer Feedback Analysis")')
L.append(' feedback_df = session.sql(f' + Q3 + 'SELECT FEEDBACK_ID, CUSTOMER_NAME, COMPANY, INDUSTRY, FEEDBACK_TEXT, CHANNEL, REGION, SNOWFLAKE.CORTEX.SENTIMENT(FEEDBACK_TEXT) AS SENTIMENT_SCORE FROM {DB}.{SCHEMA}.CUSTOMER_FEEDBACK ORDER BY SUBMITTED_AT DESC' + Q3 + ').to_pandas()')
L.append(' feedback_df["SENTIMENT_LABEL"] = feedback_df["SENTIMENT_SCORE"].apply(lambda x: "Positive" if float(x) > 0.1 else ("Negative" if float(x) < -0.1 else "Neutral"))')
L.append(' feedback_df["SENTIMENT_SCORE"] = feedback_df["SENTIMENT_SCORE"].astype(float)')
L.append(' m1, m2, m3 = st.columns(3)')
L.append(' with m1:')
L.append(""" st.metric("Average Sentiment", f"{feedback_df['SENTIMENT_SCORE'].mean():.3f}")""")
L.append(' with m2:')
L.append(' pos_pct = len(feedback_df[feedback_df["SENTIMENT_LABEL"] == "Positive"]) / len(feedback_df) * 100')
L.append(' st.metric("Positive Feedback", f"{pos_pct:.0f}%")')
L.append(' with m3:')
L.append(' neg_pct = len(feedback_df[feedback_df["SENTIMENT_LABEL"] == "Negative"]) / len(feedback_df) * 100')
L.append(' st.metric("Negative Feedback", f"{neg_pct:.0f}%")')
L.append(' col_a, col_b = st.columns(2)')
L.append(' with col_a:')
L.append(' st.subheader("Sentiment Distribution")')
L.append(' sent_chart = alt.Chart(feedback_df).mark_bar(cornerRadiusTopLeft=6, cornerRadiusTopRight=6).encode(x=alt.X("SENTIMENT_LABEL:N", title="Sentiment"), y=alt.Y("count():Q", title="Count"), color=alt.Color("SENTIMENT_LABEL:N", scale=alt.Scale(domain=["Positive", "Neutral", "Negative"], range=["#22c55e", "#f59e0b", "#ef4444"]))).properties(height=300)')
L.append(' st.altair_chart(sent_chart, use_container_width=True)')
L.append(' with col_b:')
L.append(' st.subheader("Sentiment by Company")')
L.append(' company_sent = alt.Chart(feedback_df).mark_bar(cornerRadiusTopLeft=4, cornerRadiusTopRight=4).encode(y=alt.Y("COMPANY:N", sort="-x", title="Company"), x=alt.X("mean(SENTIMENT_SCORE):Q", title="Avg Sentiment Score"), color=alt.condition(alt.datum["mean_SENTIMENT_SCORE"] > 0, alt.value("#22c55e"), alt.value("#ef4444")), tooltip=["COMPANY", "mean(SENTIMENT_SCORE)"]).properties(height=300)')
L.append(' st.altair_chart(company_sent, use_container_width=True)')
L.append(' st.subheader("Detailed Feedback with AI Analysis")')
L.append(' for _, row in feedback_df.iterrows():')
L.append(' sentiment_color = "#22c55e" if row["SENTIMENT_LABEL"] == "Positive" else ("#ef4444" if row["SENTIMENT_LABEL"] == "Negative" else "#f59e0b")')
L.append(""" with st.expander(f"**{row['CUSTOMER_NAME']}** ({row['COMPANY']}) \\u2014 :{sentiment_color}[{row['SENTIMENT_LABEL']}] ({row['SENTIMENT_SCORE']:.3f})"):""")
L.append(' st.write(row["FEEDBACK_TEXT"])')
L.append(""" st.caption(f"Channel: {row['CHANNEL']} | Region: {row['REGION']} | Industry: {row['INDUSTRY']}")""")
L.append('')
# TAB 4: Support Intelligence
L.append('with tabs[3]:')
L.append(' st.subheader("AI-Powered Support Ticket Analysis")')
L.append(' tickets_df = session.sql(f' + Q3 + 'SELECT TICKET_ID, CUSTOMER_NAME, COMPANY, SUBJECT, DESCRIPTION, PRIORITY, STATUS, CATEGORY, CREATED_AT, SNOWFLAKE.CORTEX.SENTIMENT(DESCRIPTION) AS URGENCY_SCORE, SNOWFLAKE.CORTEX.SUMMARIZE(DESCRIPTION) AS AI_SUMMARY FROM {DB}.{SCHEMA}.SUPPORT_TICKETS ORDER BY CREATED_AT DESC' + Q3 + ').to_pandas()')
L.append(' tickets_df["URGENCY_SCORE"] = tickets_df["URGENCY_SCORE"].astype(float)')
L.append(' tc1, tc2, tc3, tc4 = st.columns(4)')
L.append(' with tc1:')
L.append(' st.metric("Open Tickets", len(tickets_df[tickets_df["STATUS"] == "Open"]))')
L.append(' with tc2:')
L.append(' st.metric("Critical", len(tickets_df[tickets_df["PRIORITY"] == "Critical"]))')
L.append(' with tc3:')
L.append(' st.metric("In Progress", len(tickets_df[tickets_df["STATUS"] == "In Progress"]))')
L.append(' with tc4:')
L.append(""" st.metric("Avg Urgency", f"{tickets_df['URGENCY_SCORE'].mean():.2f}")""")
L.append(' col1, col2 = st.columns(2)')
L.append(' with col1:')
L.append(' st.subheader("Tickets by Priority")')
L.append(' priority_chart = alt.Chart(tickets_df).mark_arc(innerRadius=50).encode(theta=alt.Theta("count():Q"), color=alt.Color("PRIORITY:N", scale=alt.Scale(domain=["Critical", "High", "Medium"], range=["#ef4444", "#f59e0b", "#3b82f6"])), tooltip=["PRIORITY", "count()"]).properties(height=300)')
L.append(' st.altair_chart(priority_chart, use_container_width=True)')
L.append(' with col2:')
L.append(' st.subheader("Tickets by Category")')
L.append(' cat_chart = alt.Chart(tickets_df).mark_bar(cornerRadiusTopLeft=6, cornerRadiusTopRight=6).encode(x=alt.X("CATEGORY:N", sort="-y", title="Category"), y=alt.Y("count():Q", title="Count"), color=alt.Color("CATEGORY:N", scale=alt.Scale(scheme="tableau10"), legend=None), tooltip=["CATEGORY", "count()"]).properties(height=300)')
L.append(' st.altair_chart(cat_chart, use_container_width=True)')
L.append(' st.subheader("Ticket Details with AI Summaries")')
L.append(' for _, row in tickets_df.iterrows():')
L.append(' priority_icon = "\\U0001f534" if row["PRIORITY"] == "Critical" else ("\\U0001f7e1" if row["PRIORITY"] == "High" else "\\U0001f535")')
L.append(""" with st.expander(f"{priority_icon} [{row['TICKET_ID']}] {row['SUBJECT']} \\u2014 {row['COMPANY']} ({row['STATUS']})"):""")
L.append(""" st.markdown(f"**AI Summary:** {row['AI_SUMMARY']}")""")
L.append(' st.markdown("---")')
L.append(' st.write(row["DESCRIPTION"])')
L.append(""" st.caption(f"Priority: {row['PRIORITY']} | Category: {row['CATEGORY']} | Created: {row['CREATED_AT']}")""")
L.append('')
# TAB 5: Workforce Impact
L.append('with tabs[4]:')
L.append(' st.subheader("AI Impact on Workforce Productivity")')
L.append(' workforce_df = session.sql(f"SELECT * FROM {DB}.{SCHEMA}.WORKFORCE_IMPACT ORDER BY DEPARTMENT, ROLE_CATEGORY").to_pandas()')
L.append(' wm1, wm2, wm3, wm4 = st.columns(4)')
L.append(' with wm1:')
L.append(""" st.metric("Avg Hours Saved/Week", f"{workforce_df['HOURS_SAVED_WEEKLY'].mean():.1f}")""")
L.append(' with wm2:')
L.append(""" st.metric("Avg Productivity Boost", f"{workforce_df['PRODUCTIVITY_INCREASE_PCT'].mean():.1f}%")""")
L.append(' with wm3:')
L.append(""" st.metric("Total Tasks Automated", f"{workforce_df['TASKS_AUTOMATED'].sum()}")""")
L.append(' with wm4:')
L.append(' st.metric("Departments Impacted", workforce_df["DEPARTMENT"].nunique())')
L.append(' col_w1, col_w2 = st.columns(2)')
L.append(' with col_w1:')
L.append(' st.subheader("Hours Saved by Department")')
L.append(' dept_hours = alt.Chart(workforce_df).mark_bar(cornerRadiusTopLeft=6, cornerRadiusTopRight=6).encode(x=alt.X("DEPARTMENT:N", sort="-y", title="Department"), y=alt.Y("sum(HOURS_SAVED_WEEKLY):Q", title="Total Hours Saved/Week"), color=alt.Color("AI_TOOL_USED:N", scale=alt.Scale(scheme="tableau10")), tooltip=["DEPARTMENT", "AI_TOOL_USED", "sum(HOURS_SAVED_WEEKLY)"]).properties(height=350)')
L.append(' st.altair_chart(dept_hours, use_container_width=True)')
L.append(' with col_w2:')
L.append(' st.subheader("Productivity Increase by Role")')
L.append(' role_prod = alt.Chart(workforce_df).mark_bar(cornerRadiusTopLeft=4, cornerRadiusTopRight=4).encode(y=alt.Y("ROLE_CATEGORY:N", sort="-x", title="Role"), x=alt.X("mean(PRODUCTIVITY_INCREASE_PCT):Q", title="Avg Productivity Increase (%)"), color=alt.Color("AI_TOOL_USED:N", scale=alt.Scale(scheme="tableau10")), tooltip=["ROLE_CATEGORY", "AI_TOOL_USED", "mean(PRODUCTIVITY_INCREASE_PCT)"]).properties(height=350)')
L.append(' st.altair_chart(role_prod, use_container_width=True)')
L.append(' st.subheader("Quarterly Comparison (Q1 vs Q2 2026)")')
L.append(' q1 = workforce_df[workforce_df["QUARTER"] == "Q1"][["DEPARTMENT", "ROLE_CATEGORY", "HOURS_SAVED_WEEKLY", "PRODUCTIVITY_INCREASE_PCT", "TASKS_AUTOMATED"]].rename(columns={"HOURS_SAVED_WEEKLY": "Q1_HOURS", "PRODUCTIVITY_INCREASE_PCT": "Q1_PRODUCTIVITY", "TASKS_AUTOMATED": "Q1_TASKS"})')
L.append(' q2 = workforce_df[workforce_df["QUARTER"] == "Q2"][["DEPARTMENT", "ROLE_CATEGORY", "HOURS_SAVED_WEEKLY", "PRODUCTIVITY_INCREASE_PCT", "TASKS_AUTOMATED"]].rename(columns={"HOURS_SAVED_WEEKLY": "Q2_HOURS", "PRODUCTIVITY_INCREASE_PCT": "Q2_PRODUCTIVITY", "TASKS_AUTOMATED": "Q2_TASKS"})')
L.append(' comparison = q1.merge(q2, on=["DEPARTMENT", "ROLE_CATEGORY"], how="inner")')
L.append(' if not comparison.empty:')
L.append(' comparison["HOURS_DELTA"] = comparison["Q2_HOURS"] - comparison["Q1_HOURS"]')
L.append(' comparison["PRODUCTIVITY_DELTA"] = comparison["Q2_PRODUCTIVITY"] - comparison["Q1_PRODUCTIVITY"]')
L.append(' st.dataframe(comparison, use_container_width=True, hide_index=True)')
L.append('')
# TAB 6: News Intelligence
L.append('with tabs[5]:')
L.append(' st.subheader("AI-Powered Enterprise News Analysis")')
L.append(' news_df = session.sql(f' + Q3 + 'SELECT ARTICLE_ID, TITLE, CONTENT, SOURCE, PUBLISHED_DATE, LANGUAGE, SNOWFLAKE.CORTEX.SENTIMENT(CONTENT) AS SENTIMENT, SNOWFLAKE.CORTEX.SUMMARIZE(CONTENT) AS SUMMARY FROM {DB}.{SCHEMA}.ENTERPRISE_NEWS ORDER BY PUBLISHED_DATE DESC' + Q3 + ').to_pandas()')
L.append(' news_df["SENTIMENT"] = news_df["SENTIMENT"].astype(float)')
L.append(' nm1, nm2, nm3 = st.columns(3)')
L.append(' with nm1:')
L.append(' st.metric("Total Articles", len(news_df))')
L.append(' with nm2:')
L.append(' st.metric("Languages", news_df["LANGUAGE"].nunique())')
L.append(' with nm3:')
L.append(""" st.metric("Avg Sentiment", f"{news_df['SENTIMENT'].mean():.3f}")""")
L.append(' st.subheader("Translate Non-English Articles")')
L.append(' non_eng = news_df[news_df["LANGUAGE"] != "en"]')
L.append(' if not non_eng.empty:')
L.append(' for _, row in non_eng.iterrows():')
L.append(""" with st.expander(f"\\U0001f310 [{row['LANGUAGE'].upper()}] {row['TITLE']} \\u2014 {row['SOURCE']}"):""")
L.append(""" st.write(f"**Original ({row['LANGUAGE']}):** {row['CONTENT'][:300]}...")""")
L.append(" safe_content = row['CONTENT'].replace(chr(39), chr(39)+chr(39))")
L.append(" safe_lang = row['LANGUAGE'].replace(chr(39), chr(39)+chr(39))")
L.append(""" translated = session.sql(f"SELECT SNOWFLAKE.CORTEX.TRANSLATE('{safe_content}', '{safe_lang}', 'en') AS T").to_pandas()""")
L.append(""" st.success(f"**English Translation:** {translated['T'].iloc[0]}")""")
L.append(' st.subheader("All Articles with AI Summaries")')
L.append(' for _, row in news_df.iterrows():')
L.append(' sent_color = "\\U0001f7e2" if row["SENTIMENT"] > 0.1 else ("\\U0001f534" if row["SENTIMENT"] < -0.1 else "\\U0001f7e1")')
L.append(""" with st.expander(f"{sent_color} {row['TITLE']} \\u2014 {row['SOURCE']} ({row['PUBLISHED_DATE']})"):""")
L.append(""" st.markdown(f"**AI Summary:** {row['SUMMARY']}")""")
L.append(' st.markdown("---")')
L.append(' st.write(row["CONTENT"])')
L.append(""" st.caption(f"Language: {row['LANGUAGE']} | Sentiment: {row['SENTIMENT']:.3f}")""")
L.append('')
L.append('st.markdown("---")')
L.append('st.caption("Agentic Enterprise Control Plane | Powered by Snowflake Intelligence, Cortex Code & Cortex AI Functions | April 2026")')
content = NL.join(L)
# Save to APP_SOURCE table
session.sql("DELETE FROM AGENTIC_ENTERPRISE_DEMO.APP_DATA.APP_SOURCE WHERE FILE_NAME = 'agentic_enterprise_app.py'").collect()
session.sql(f"INSERT INTO AGENTIC_ENTERPRISE_DEMO.APP_DATA.APP_SOURCE VALUES ('agentic_enterprise_app.py', {DQ}{content}{DQ})").collect()
# Write to stage
tmpdir = tempfile.mkdtemp()
fpath = os.path.join(tmpdir, 'agentic_enterprise_app.py')
with open(fpath, 'w') as f:
f.write(content)
session.file.put(fpath, '@AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE', auto_compress=False, overwrite=True)
shutil.rmtree(tmpdir)
return f'agentic_enterprise_app.py uploaded to stage ({len(content)} bytes)'
$$;
CALL AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_APP_FILE();
Steps 6–8: Verify, Create, Confirm
-- Verify both files are on stage
LIST @AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE;
-- Create the Streamlit app object
CREATE OR REPLACE STREAMLIT AGENTIC_ENTERPRISE_DEMO.APP_DATA.AGENTIC_ENTERPRISE_DASHBOARD
ROOT_LOCATION = '@AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE'
MAIN_FILE = 'agentic_enterprise_app.py'
QUERY_WAREHOUSE = 'COMPUTE_WH'
TITLE = 'Agentic Enterprise Control Plane'
COMMENT = 'Production-grade dashboard: Snowflake Intelligence & Cortex Code for the Agentic Enterprise';
-- Confirm deployment
SHOW STREAMLITS IN SCHEMA AGENTIC_ENTERPRISE_DEMO.APP_DATA;
The LIST output should show agentic_enterprise_app.py and environment.yml exactly as intended—no temp prefixes, no renaming. This works because the procedure writes files using their final names upfront via os.path.join(tmpdir, 'agentic_enterprise_app.py') before the PUT step.
The Deployment Pattern in Context
Three ideas make this pattern worth borrowing:
1. chr() substitution avoids SQL delimiter conflicts
Embedding Snowflake SQL ($$) inside Python stored procedures (also $$) usually breaks parsing. By defining DQ = chr(36)*2 and constructing dollar-quotes at runtime, the parser never sees raw $$—no conflict, no errors.
2. APP_SOURCE = built-in version control
Each run of WRITE_APP_FILE() replaces the previous version (DELETE + INSERT). You can inspect the latest app instantly with:
SELECT CONTENT FROM APP_SOURCE WHERE FILE_NAME = 'agentic_enterprise_app.py'
No stage browsing. No external workspace.
3. Clean temp handling with shutil.rmtree
Instead of deleting files one-by-one, shutil.rmtree(tmpdir) removes the entire temp directory in one go—safer and more reliable in shared Snowpark runtime environments.
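If you want that cleanup to survive an exception during the PUT, the same pattern can be wrapped in a context manager. A sketch, assuming a live Snowpark session (write_to_stage and its default stage argument are illustrative names, not from the worksheet; session.file.put is the real Snowpark API):

```python
import os
import tempfile

def write_to_stage(session, content,
                   stage='@AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE'):
    # TemporaryDirectory removes the whole temp tree on exit, even if
    # session.file.put raises, so no explicit shutil.rmtree is needed.
    with tempfile.TemporaryDirectory() as tmpdir:
        # Final file name upfront: no rename step after the PUT.
        fpath = os.path.join(tmpdir, 'agentic_enterprise_app.py')
        with open(fpath, 'w') as f:
            f.write(content)
        session.file.put(fpath, stage, auto_compress=False, overwrite=True)
    return f'agentic_enterprise_app.py uploaded ({len(content)} bytes)'
```

The explicit mkdtemp/rmtree pair in the procedure behaves the same way on the happy path; the context-manager form just guarantees cleanup on errors too.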
Part 7: Streamlit App Live
(Screenshots of the deployed dashboard's six tabs appeared here in the original post.)
Part 8: Cleanup
-- Option A: App + stage + procedures only (data tables preserved)
DROP STREAMLIT IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.AGENTIC_ENTERPRISE_DASHBOARD;
DROP STAGE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE;
DROP PROCEDURE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_ENV_FILE();
DROP PROCEDURE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_APP_FILE();
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.APP_SOURCE;
-- Option B: Full cleanup - removes everything
DROP STREAMLIT IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.AGENTIC_ENTERPRISE_DASHBOARD;
DROP STAGE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.STREAMLIT_STAGE;
DROP PROCEDURE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_ENV_FILE();
DROP PROCEDURE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.WRITE_APP_FILE();
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.APP_SOURCE;
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.CUSTOMER_FEEDBACK;
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.SUPPORT_TICKETS;
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.AI_ADOPTION_METRICS;
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.WORKFORCE_IMPACT;
DROP TABLE IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA.ENTERPRISE_NEWS;
DROP SCHEMA IF EXISTS AGENTIC_ENTERPRISE_DEMO.APP_DATA;
DROP DATABASE IF EXISTS AGENTIC_ENTERPRISE_DEMO;
What We Actually Built
From a single worksheet — tested end-to-end:
- 5 enterprise tables · 70 records · 10 companies · 10 industries: designed with intentional distributions so every Cortex AI demo produces meaningful, varied output
- 6 Cortex AI functions: production-grade SQL patterns you can reuse in real pipelines
- 4 cross-table analytics queries: blending AI scoring with traditional aggregations
- 2 Python stored procedures: WRITE_ENV_FILE (conda config) + WRITE_APP_FILE (full Streamlit app), built line-by-line using chr() substitution, persisted to APP_SOURCE, and PUT directly to stage with no rename step
- 1 six-tab Streamlit app: includes an interactive Cortex AI Lab, so business users need no SQL
Six functions. Seventy records. Five tables. Four queries. Two procedures. One app.
The agentic enterprise isn’t coming — it’s already here.
Snowflake Intelligence + Cortex Code is your AI control plane — now you have a complete, tested blueprint to build on it.
Resources
- 📘 Cortex AI Functions: docs.snowflake.com/cortex
- 🧠 Snowflake Intelligence: snowflake.com/intelligence
- 🏗️ Streamlit on Snowflake: docs.snowflake.com/streamlit
Follow Snowflake Chronicles for practitioner-first Snowflake content every week.
📘 Medium: @snowflakechronicles 🔗 LinkedIn: satishkumar-snowflake
#Snowflake #CortexAI #SnowflakeIntelligence #DataEngineering #Streamlit #CortexCode #AgenticAI #DataCloud #SnowflakeChronicles
Disclaimer: All companies, data, and metrics are fictitious and for illustration only. Results may vary by Snowflake setup, and Cortex features/models can change — verify with official docs.
All views expressed are my own and do not represent any current or past employer.
Snowflake · Cortex AI · Data Engineering · Streamlit · Agentic AI
Building the Agentic Enterprise Control Plane on Snowflake was originally published in Towards AI on Medium.