<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Diffuse AI]]></title><description><![CDATA[A publication focused on telling stories and publishing analysis about how AI is diffusing into the real world today]]></description><link>https://www.diffuseai.pub</link><image><url>https://substackcdn.com/image/fetch/$s_!NpXP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b6aad8f-a9b8-4e52-9ee6-cc58752ba11c_964x964.png</url><title>Diffuse AI</title><link>https://www.diffuseai.pub</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 07:12:54 GMT</lastBuildDate><atom:link href="https://www.diffuseai.pub/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Charles Yang]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[diffuseai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[diffuseai@substack.com]]></itunes:email><itunes:name><![CDATA[Charles Yang]]></itunes:name></itunes:owner><itunes:author><![CDATA[Charles Yang]]></itunes:author><googleplay:owner><![CDATA[diffuseai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[diffuseai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Charles Yang]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Democratizing Discovery: Large Language Models (LLMs), Hackathons, and the Future of Materials Science and Chemistry Research]]></title><description><![CDATA[Or: How To Get More Scientists to Build with AI Agents]]></description><link>https://www.diffuseai.pub/p/accelerating-ai-diffusion-for-materials</link><guid isPermaLink="false">https://www.diffuseai.pub/p/accelerating-ai-diffusion-for-materials</guid><dc:creator><![CDATA[Ben Blaiszik]]></dc:creator><pubDate>Thu, 23 Apr 2026 15:47:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4ccd39ac-b586-4657-9f9f-3ba9604528a3_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p style="text-align: justify;">Large language models (LLMs) are rapidly changing scientific research. <a href="https://www.anthropic.com/news/accelerating-scientific-research">Anthropic</a>, <a href="https://arxiv.org/abs/2511.16072">OpenAI</a>, <a href="https://arxiv.org/abs/2602.03837">Google</a>, <a href="https://arxiv.org/abs/2505.13400">FutureHouse</a>, and others have all shared recent work documenting the expansive scope. The claims range from e.g., <a href="https://edisonscientific.com/articles/announcing-kosmos">compressing six person-months of research into a day</a>; <a href="https://www.isomorphiclabs.com/articles/the-isomorphic-labs-drug-design-engine-unlocks-a-new-frontier">modeling how small molecules interact with proteins with dramatically increased speed and precision</a>; and <a href="https://www.wired.com/story/a-new-ai-math-ai-startup-just-cracked-4-previously-unsolved-problems/">solving long-standing mathematical conjectures</a>. Beyond headline-grabbing scientific results LLMs are reshaping the day-to-day work with Novo Nordisk reporting usage of Claude to <a href="https://claude.com/customers/novo-nordisk">reduce paperwork overhead</a> dramatically. 
This is happening now, and if the models and tooling keep improving, it will only accelerate.</p><p style="text-align: justify;">Research in materials science and chemistry presents a unique opportunity: specific bottlenecks that align with what LLMs are unusually well-suited to address. The friction points are addressable, and what is possible on the other side (better batteries, new classes of drugs, lighter structural materials, more efficient catalysts) is deeply important to the future of material abundance, economic prosperity, and good health that we hope for.</p><h2 style="text-align: justify;">The Bottlenecks are Specific and Addressable</h2><p style="text-align: justify;">Work in materials science and chemistry spans extraordinary ranges in length scales (from a few atoms to manufactured components), time scales (from femtosecond reaction dynamics to years of material degradation), and methodologies (from quantum chemistry simulations to bench-scale synthesis and production scale-up). The result is exceptional heterogeneity in data, tools, and workflows. A team optimizing a catalyst might need to run campaigns of quantum simulations and manage data from dozens of instruments with differing file formats, all while synthesizing their own knowledge with figures, tables, and text from thousands of papers of varying quality.</p><p style="text-align: justify;">Further, while materials science and chemistry have generated large datasets and repositories (e.g., <a href="https://next-gen.materialsproject.org">Materials Project</a>, <a href="https://oqmd.org">OQMD</a>, <a href="https://nomad-lab.eu/nomad-lab/">NOMAD</a>, <a href="https://huggingface.co/facebook/OMol25">OMol25</a>, the <a href="https://www.materialsdatafacility.org">Materials Data Facility</a>, and more; see <a href="https://github.com/blaiszik/awesome-matchem-datasets">this list of hundreds of other resources</a>), there is no equivalent of GenBank for genomics or the Protein Data Bank for protein science, because materials and chemistry data are so exceptionally heterogeneous. As such, there is no universal structured repository of synthesis procedures, process-structure-property relationships, or their molecular equivalents; that information instead lives in personal expertise, millions of papers with their figures and tables, hundreds of data resources, lab notebooks, and various repositories.</p><p style="text-align: justify;">Researchers still spend significant time on mind-numbing tasks like data entry, extracting information from papers by hand, converting files between formats, and writing documentation and reports. These small, grinding tasks have nothing to do with the creative work of science, yet they consume enormous chunks of mental capacity.</p>
<p style="text-align: justify;">Dealing with this heterogeneity in data, software, and workflows is exactly the kind of task LLMs are good at. LLMs have the potential to become connective tissue, a universal interface layer, because they are adept at translating between human intent and machine actions, between schemas, between the inputs and outputs of different tools, and even between narrative scientific context and structured procedures. With such a broad new class of capabilities, there is a transformational opportunity to build this connective tissue between the fragmented software, infrastructure, databases, and papers that currently don&#8217;t talk to each other. In such a vast application space of rapidly emerging LLM capabilities, it would take years to fully understand, specify, prioritize, and build such infrastructure if we waited for traditional research projects to be funded for this kind of work.</p>
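<p style="text-align: justify;">To make the &#8220;universal interface layer&#8221; idea concrete, here is a minimal sketch of the pattern (our illustration, not a hackathon project; it assumes the OpenAI Python client with an API key in the environment, and the column names and prompt are invented): the model maps one lab&#8217;s instrument CSV header onto a shared schema and returns a machine-readable mapping.</p><pre><code>import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# one lab's instrument export vs. the schema a shared repository expects
instrument_header = ["Temp (degC)", "t_s", "sigma (S/cm)"]
target_schema = {"temperature_K": "float", "time_s": "float",
                 "conductivity_S_per_cm": "float"}

prompt = (
    "Map each source column to a target field, with a unit-conversion "
    'scale and offset. Reply as JSON: {"mapping": {source_column: '
    '{"field": str, "scale": float, "offset": float}}}.\n'
    f"Source columns: {instrument_header}\n"
    f"Target schema: {json.dumps(target_schema)}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)
mapping = json.loads(resp.choices[0].message.content)["mapping"]
print(mapping)  # e.g. "Temp (degC)" maps to temperature_K with offset 273.15
</code></pre><p style="text-align: justify;">Run across thousands of files, the same pattern turns format conversion from a manual chore into a batch job, with a human spot-checking the proposed mappings.</p>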
<p style="text-align: justify;">Instead, we decided to see what tools scientists could build for themselves through a hackathon.</p><h2 style="text-align: justify;">The Hackathons: 2000 researchers across 4 continents</h2><p style="text-align: justify;">The first <a href="https://llmhackathon.github.io/about/">LLM Hackathon for Applications in Materials Science and Chemistry</a> was held in 2023. Going in, we didn&#8217;t know if anyone would show up or whether the tools were mature enough to build anything real in 24 hours. It also wasn&#8217;t assured that materials scientists and chemists would have the requisite software development skills. But people showed up, and the latent capability to build was evident almost immediately, even though many participants had never built an agentic system or worked with an LLM before.</p><p style="text-align: justify;">Over three events, in <a href="https://doi.org/10.1039/D3DD00113J">2023</a>, <a href="https://doi.org/10.1088/2632-2153/ae011a">2024</a>, and <a href="https://llmhackathon.github.io">2025</a> (see figure), more than 2,000 participants, primarily graduate students, created over 170 publicly documented, open-source projects. The 2025 event alone had 16 in-person sites, plus an online hub for worldwide access. We provide a <a href="https://llmhackathon.github.io">detailed description of all of the projects, with links, here</a>; in short, participants built complex software including: natural language interfaces that let non-experts control advanced instruments; agentic workflows that compress the time from idea to experiment; automated data management systems that make it possible to share data across labs; new ways to train researchers; new generative and predictive models; tools to extract structured information from corpora of research papers; and much more.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!kfCv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7adbf34-c0fe-4860-b3fa-c2d91fe7a7c0_1200x630.png" width="1200" height="630" alt=""></figure></div>
<p style="text-align: justify;">These events produced moments of connection, learning, and building: teams spanning the globe formed in interesting ways (e.g., researchers in India partnering with researchers in the United States), and midnight &#8220;aha&#8221; moments on Slack put teams on prize-winning paths. We purposely designed this not as a single-day event but as an ongoing community where these moments have space to continue, enabling researchers to catalyze startups and worldwide collaborations, land fellowships, write papers together, and make discoveries of their own.</p><p style="text-align: justify;">In these events, we primarily focused on building and learning. To be clear, these projects are prototypes, not yet production systems. But many of them are <em>really good</em> prototypes, and the speed at which they were built tells us something important about where the technology is right now and the latent capability of researchers to adapt these tools.</p><p style="text-align: justify;">We next describe the outcomes and what we learned in more detail.
We group the projects into four buckets for convenience, though many projects span multiple categories:</p><ol><li><p style="text-align: justify;"><strong>Prediction and design</strong>: using LLMs alongside traditional ML to predict material properties or generate candidate structures and other distributions, especially when data are scarce.</p></li><li><p style="text-align: justify;"><strong>Interfaces and automation</strong>: conversational control of instruments, simulations, and databases, and closed-loop systems that compress the hypothesis-to-validation cycle.</p></li><li><p style="text-align: justify;"><strong>Data management and knowledge extraction</strong>: compiling structured, computable data out of the messy world of PDFs, lab notebooks, patents, lightly documented datasets, and inconsistent databases.</p></li><li><p style="text-align: justify;"><strong>Education and scientific communication</strong>: tutoring platforms, virtual lab simulators, and tools for creating explanatory content for audiences ranging from aspiring students to domain experts.</p></li></ol><h3 style="text-align: justify;">Prediction and design: LLMs as complements, not replacements</h3><p style="text-align: justify;">As discussed above, researchers in materials science and chemistry often work in low-data regimes with heterogeneous data. Importantly, the projects showed the strength of LLMs under such conditions.</p><p style="text-align: justify;">The <a href="https://www.youtube.com/watch?v=Aw0dAoU7v10">LLM4ConProp</a> team explored whether LLMs can directly predict a material property of concrete. They assembled two curated datasets totaling nearly 4,000 records of concrete compressive strength, composition, and processing, and ran a head-to-head comparison of GPT-4.1 in zero-shot and few-shot settings against tree-based models (random forests, XGBoost, LightGBM). The results showed that with enough in-context examples, the LLM approached the tree-based baselines in low-data regimes.</p>
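<p style="text-align: justify;">The few-shot pattern itself fits in a few lines. Below is a minimal sketch of such a comparison (our own toy version with invented records, not the team&#8217;s code; it assumes the OpenAI Python client and scikit-learn): serialize labeled mixes into the prompt, ask the model for a number, and compare against a tree-based baseline trained on the same records.</p><pre><code>from openai import OpenAI
from sklearn.ensemble import RandomForestRegressor

# toy records: (cement kg/m3, water kg/m3, age days) -> strength (MPa);
# the real LLM4ConProp datasets held nearly 4,000 curated records
train = [((540, 162, 28), 80.0), ((332, 228, 270), 40.3),
         ((198, 192, 90), 29.2), ((266, 228, 365), 52.5)]
query = (380, 190, 28)

shots = "\n".join(
    f"cement={x[0]}, water={x[1]}, age={x[2]} -> {y} MPa" for x, y in train)
prompt = (f"{shots}\ncement={query[0]}, water={query[1]}, "
          f"age={query[2]} -> ? Reply with a number only.")

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4.1", messages=[{"role": "user", "content": prompt}])
llm_pred = float(resp.choices[0].message.content.strip())

baseline = RandomForestRegressor(random_state=0)
baseline.fit([list(x) for x, _ in train], [y for _, y in train])
print(llm_pred, baseline.predict([list(query)])[0])
</code></pre><p style="text-align: justify;">The point is not that the LLM wins, but that it gets close with no feature engineering or training pipeline at all.</p>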
<p style="text-align: justify;">Unsurprisingly, LLMs also work well with multimodal and heterogeneous inputs. Researchers may wish to provide natural language descriptions of desired properties, draw on research papers, or describe material or molecular characteristics and then generate structures, synthesis pathways, or other distributions. The <a href="https://github.com/pagel-s/MIDAS">MIDAS</a> team built a prototype showing structure-based drug design by conditioning a diffusion model on both protein pocket geometry and natural language instructions. With this system, users can say things like &#8220;generate a molecule with a hydroxyl group targeting this binding site&#8221; and get candidates that reflect both the structural constraints and chemical intuition. To train this, the team generated approximately one million molecule-text description pairs using GPT-3.5, covering functional groups, molecular properties, and pharmacophore descriptions. The whole system was wrapped in a chat interface with additional tools for docking analysis, similarity search, retrosynthesis, and iterative refinement through conversational feedback. Another project, <a href="https://github.com/hspark1212/synthesis-agent">SKY</a>, tackled the inverse problem of how to make a material given specific property targets. The system takes a natural language description of a target structure, runs recursive similarity searches across Materials Project data, and uses LLM reasoning to propose grounded synthesis routes.</p><div id="youtube2-ffLqLH87yLo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ffLqLH87yLo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ffLqLH87yLo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p style="text-align: justify;">These projects illustrate how LLMs are effective in prediction and design, especially when paired with traditional ML models as complements, providing interfaces that handle the messy, heterogeneous inputs and outputs of those models and manage cross-service data connections. As foundation models improve at reasoning over mixed modalities like text, structures, spectra, and images, this complementary role is likely to expand, particularly in the low-data regimes that define much of materials science and chemistry.</p><div><hr></div><h3 style="text-align: justify;">Interfaces and automation: making billion-dollar infrastructure more accessible</h3><p style="text-align: justify;">The US spends billions annually on national user facilities (synchrotrons, neutron sources, nanocenters, high-end electron microscopes, exascale supercomputers) that are too large and specialized for any single university to host. These are some of the most powerful scientific instruments on the planet, including publicly funded supercomputing facilities that provide compute access to researchers nationwide. Far fewer researchers use them than should, largely because access requires months of specialized training; removing that barrier could have outsized impact.</p><p style="text-align: justify;">Natural language interfaces offer a clear path forward. Some of the most common applications run on scientific supercomputers are density functional theory (DFT) and molecular dynamics (MD) calculations, but running simulation campaigns at high-performance computing facilities requires understanding the specific system you are running on, optimizing for scale across many nodes, carefully monitoring jobs, and more. <a href="https://github.com/BigDFT-group/llm-hackathon-2025">LARA-HPC</a> built an assistant that translates scientific goals, e.g., &#8220;compute the atomization energy of HCN&#8221;, into complete HPC DFT workflows. Similarly, <a href="https://github.com/ncsu-llm-hackathon-materials-2025/MINT-LLM">MINT LLM</a> created natural language interfaces for MD analysis across simulation codes.</p><p style="text-align: justify;">Automated labs are gaining prominence within national user facilities and industry. The <a href="https://www.youtube.com/watch?v=bMx332SAWv4">ACME</a> team connected molecular design, quantum chemistry, and simulated robotic experimentation in a single feedback loop for discovering molecules for critical materials extraction. A reasoning model retrieves domain knowledge (e.g., ligand design rules and coordination geometry constraints from curated publications), then constructs candidate molecules. Those candidates pass through automated computational screening, where the system builds 3D structures, runs semi-empirical quantum chemistry (GFN-xTB), and ranks candidates by metal-ion binding affinity. In this hackathon, the top candidates didn&#8217;t enter the physical world, but rather a virtual stand-in that is being developed to synthesize and characterize coatings of these future molecules. Simulation results feed back to the reasoning model, which updates design rules and proposes improved candidates.</p><div id="youtube2-bMx332SAWv4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;bMx332SAWv4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/bMx332SAWv4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>
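<p style="text-align: justify;">The shape of such a design-evaluate-feedback loop is easy to sketch. The version below is our schematic, not ACME&#8217;s code: an OpenAI-style client plays the reasoning model, RDKit generates quick 3D structures, and a classical force-field energy stands in for the GFN-xTB metal-ion binding-affinity step the team used.</p><pre><code>from openai import OpenAI
from rdkit import Chem
from rdkit.Chem import AllChem

client = OpenAI()

def propose(history):
    # reasoning step: ask for new ligands given the scores seen so far
    msg = ("Propose 3 ligand SMILES for selective Li+ binding. "
           f"Scores so far (lower is better): {history}. "
           "Reply with one SMILES string per line, nothing else.")
    out = client.chat.completions.create(
        model="gpt-4.1", messages=[{"role": "user", "content": msg}])
    return out.choices[0].message.content.split()

def embed_3d(smiles):
    # quick 3D structure; returns None if the SMILES is invalid
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    mol = Chem.AddHs(mol)
    ok = AllChem.EmbedMolecule(mol, randomSeed=1)
    return mol if ok == 0 else None

def score(mol):
    # placeholder: a force-field energy where ACME ran semi-empirical
    # GFN-xTB and ranked by metal-ion binding affinity
    props = AllChem.MMFFGetMoleculeProperties(mol)
    if props is None:
        return float("inf")  # MMFF cannot type this molecule
    return AllChem.MMFFGetMoleculeForceField(mol, props).CalcEnergy()

history = {}
for _ in range(3):  # a few design-evaluate-feedback rounds
    for smi in propose(history):
        mol = embed_3d(smi)
        if mol is not None:
            history[smi] = round(score(mol), 2)
print(sorted(history, key=history.get)[:3])  # current best candidates
</code></pre>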
<p style="text-align: justify;">These prototypes are directly aligned with major national investments. The <a href="https://genesis.energy.gov">US Genesis Mission</a>, launched by executive order in late 2025, aims to connect DOE&#8217;s 17 national laboratories, <a href="https://www.energy.gov/science/office-science-user-facilities">user facilities</a>, and scientific datasets into an integrated discovery engine. Canada&#8217;s Acceleration Consortium, various efforts in Europe and Asia, and industry efforts from Lila, Cusp, DeepMind, Periodic Labs, Radical AI, and others are pursuing parallel ambitions spanning automated labs and scientific intelligence. The hackathon projects show that the research community is ready to build on this infrastructure, and that many of these ambitious goals may be closer than expected. These working prototypes also point to a future where scientific user facilities have new interfaces for improved accessibility and efficiency.</p><div><hr></div><h3 style="text-align: justify;">Data management and knowledge extraction: liberating a century of science from PDFs and notebooks</h3><p style="text-align: justify;">Federal mandates increasingly require that scientific data be Findable, Accessible, Interoperable, and Reusable (FAIR). In practice, that compliance falls disproportionately on graduate students and postdocs, the benefit to any single researcher rarely feels worth the effort, and the result is repositories that are technically FAIR but still unusable.</p><p style="text-align: justify;">LLMs offer a way to fundamentally shift this burden and capture structured data more efficiently. For example, <a href="https://www.youtube.com/watch?v=AUU9osunIuw">ExpAlign</a> created a pipeline that parses hundreds of research PDFs, extracts key properties and hidden experimental details, flags inconsistencies, and combines results into clean, ML-ready data tables. <a href="https://github.com/zakidotai/MuMMIE">MuMMIE</a> tackled multilingual patent extraction across five languages, creating benchmarks for cross-lingual scientific information extraction. <a href="https://www.youtube.com/watch?v=yduuYq5Egg0">SuperconLLM</a> built a four-agent pipeline that screens arXiv papers, performs named entity recognition and relation extraction, and generates structured database entries for superconductor properties.</p>
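<p style="text-align: justify;">The core extraction step behind projects like these can be sketched in a few lines (the prompt, fields, and model choice are ours, not any team&#8217;s): free text goes in, a structured database row comes out.</p><pre><code>import json
from openai import OpenAI

client = OpenAI()

passage = ("We report superconductivity in LaFeAsO0.9F0.1 with an onset "
           "transition temperature of 26 K, measured by resistivity.")

prompt = (
    "Extract every superconductor mentioned in the text. Reply as JSON: "
    '{"entries": [{"material": str, "tc_kelvin": float, '
    '"measurement": str}]}.\nText: ' + passage
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)
rows = json.loads(resp.choices[0].message.content)["entries"]
print(rows)  # one clean, queryable row per reported material
</code></pre><p style="text-align: justify;">Scaled over a corpus and combined with screening and relation-extraction agents, as SuperconLLM did, this kind of step is the seed of a continuously updated, citation-backed database.</p>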
<p style="text-align: justify;">PolyNexus created a domain-specific knowledge base for electroactive polymers where you can ask natural language queries (&#8220;What is the conductivity of PEDOT:PSS?&#8221;) and get citation-backed results.</p><p style="text-align: justify;">This extraction and information access at scale, when coupled with private sector efforts (e.g., Edison Scientific/FutureHouse, Chemical Abstracts Service, ChemDataExtractor, Citrine Informatics), promises to collate and liberate hundreds of years of humanity&#8217;s collective scientific knowledge, moving it out of PDF files and into structured, computable, and accessible data sources.</p><div><hr></div><h3 style="text-align: justify;">Education and scientific communication: the parts nobody talks about</h3><p style="text-align: justify;">Research impact depends not just on discovery but on how effectively knowledge transfers to the next generation of researchers and diffuses to adjacent fields and to the public. These are areas where researchers typically receive little training and have few tools, and where LLMs may have an outsized impact.</p><p style="text-align: justify;">For example, <a href="https://huggingface.co/Abbasaabdul/AI_ChemTutor/tree/main">ChemTutor AI</a> built a domain-specific tutoring platform that generates personalized problems with interactive 3D molecular visualizations and adaptive difficulty. It provides pedagogical scaffolding including step-by-step reasoning, worked examples, and Socratic questioning. <a href="https://www.youtube.com/watch?v=1KUYHbP1Bm4">MatSci LapLab</a> built a tool to simulate characterization techniques like TGA, SEM, and tensile testing, enabling students to train virtually.</p><div id="youtube2-s1nscrTJIa8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;s1nscrTJIa8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/s1nscrTJIa8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p style="text-align: justify;">On the communication side, <a href="https://github.com/vrindaasomjit/atomic-shorts">AtomicShorts</a> built a three-agent pipeline that helps scientists create short explanatory videos at a fraction of commercial costs and at different audience knowledge levels. By keeping scientists in the loop, the system ensures accuracy while closing some of the enormous gap between researchers and the public.</p><p style="text-align: justify;">These tools address an asymmetry: researchers are trained and incentivized to do science, but far less so to teach or communicate. If LLMs can lower the cost of creating high-quality educational and explanatory content, the pool of people who engage with, and eventually contribute to, scientific research could increase significantly.</p><div><hr></div><h2 style="text-align: justify;">The hackathon model: why this worked and how to replicate it</h2><p style="text-align: justify;">The hackathon series was itself an experimental test of what we call the &#8220;Social-First Hackathon Model&#8221;.
Many research communities could replicate this model; in fact, it has already been used for <a href="https://kaliningroup.github.io/mic-hackathon/">two hackathons in microscopy</a> and one in <a href="https://ac-bo-hackathon.github.io">Bayesian optimization</a>.</p><p style="text-align: justify;">We designed the model around five principles:</p><ol><li><p style="text-align: justify;"><strong>Public and social by default.</strong> Teams submit entries via social media (LinkedIn, X, YouTube), with accompanying code repositories and a Google Forms entry. The social posts create immediate public visibility, provide verifiable credit for CVs, and eliminate the need for custom infrastructure.</p></li><li><p style="text-align: justify;"><strong>Minimal central coordination.</strong> The entire event runs on free tools like Slack, Zoom, GitHub, Luma, and Google Workspace. Participants form teams, decide topics, and plan projects autonomously. Two to three central organizers coordinated an event of over a thousand participants.</p></li><li><p style="text-align: justify;"><strong>Hybrid from the start.</strong> A global virtual cohort participates alongside physical hubs at institutions that volunteer to host. This decentralizes operations while maintaining a unified program and allows the best talent from anywhere to participate.</p></li><li><p style="text-align: justify;"><strong>Time-boxed intensity.</strong> The 24-hour constraint forces rapid prioritization, prevents scope creep, creates urgency, and allows researchers to participate without compromising their ongoing research.</p></li><li><p style="text-align: justify;"><strong>Leaning into academic incentives.</strong> Participants are primarily graduate students and postdocs, so co-authorship on papers, presentation opportunities, and awards for CVs matter. After each event, we assembled teams to write articles including as many active participants as possible.</p></li></ol><p style="text-align: justify;">Three lessons emerged across events. <strong>First, team formation before the event is critical.</strong> The most common failure mode is participants showing up without teammates and being unable to crystallize both a team and a concept. The 2025 event addressed this by emphasizing early skill-matching for over a month before the event, using the shared Slack, virtual meetings, and a custom Miro board.</p><p style="text-align: justify;"><strong>Second, domain breadth within teams matters.</strong> The events place no limits on team formation: teams can be as big or small as needed and can combine in-person and virtual members. Many of the strongest teams combined broad expertise, e.g., computational chemists, experimentalists, and ML engineers.</p><p style="text-align: justify;"><strong>Third, the community persists.</strong> Over 1,400 researchers remained active in shared Slack channels after the events concluded, continuing collaborations, posting job opportunities, and building on each other&#8217;s work.
The hackathon is a nucleation point that brings together high-agency researchers in a long-lived community that continues to percolate afterward.</p><p style="text-align: justify;">Importantly, this model handles aspects that traditional funding mechanisms handle poorly: 1) rapid landscape mapping (170 projects surveying possibilities); 2) real-time workforce development, with participants learning by building and forming lasting collaborations; 3) public-by-default outputs, where every project is immediately available for others to learn from and build on; and 4) talent identification, where two days of effort can produce a significant addition to a researcher&#8217;s CV.</p><p style="text-align: justify;">The projects and community described here focused on materials science and chemistry, but illustrate a broader pattern about how AI capabilities spread through research communities. Properly structured hackathons may function as adoption accelerators, compressing the learning curve, giving researchers permission to experiment (and fail), and producing reusable examples and visible proof that the tools work.</p><p style="text-align: justify;">These events showed not just diffusion but the building of capability: a workforce able to effectively leverage AI can be assembled rapidly. The hackathons were windows into diffusion and capacity building happening in real time, suggesting that research institutions, national labs, and professional societies could drive meaningful AI adoption by running similar events in their own domains. The model is lightweight, replicable, and documented. There are still areas for improvement, e.g., sustaining momentum after the event ends, securing compute credits and inference access for deeper development, and bridging the gap between a promising hackathon prototype and a tool that researchers use daily. These are tractable problems, but they require intentional investment in community infrastructure, scientific middleware, and shared compute to further build researcher capacity and move tools into production use.</p><div><hr></div><p style="text-align: justify;">One important point sticks with me and fills me with hope. Many of the teams that participated, including those that built reasoning agents connected to simulation tools and external databases, autonomous NMR analysis pipelines, and natural language interfaces for software and hardware, <em>had never worked with LLMs or constructed an agentic system before</em>. Yet, with a clear goal and aligned incentives, they accomplished it together in just over a day. That fact alone should change our priors on the near-term trajectory of scientific research, even if these participants are perhaps exceptional due to selection biases.</p><p style="text-align: justify;">Scientific tools often diffuse slowly through research communities, taking years between invention and widespread use. The 170 projects built at these events are prototypes that provide a concrete starting point to speed that diffusion, though real work and real investment separate a hackathon demo from a tool used daily by thousands of researchers. From these events, researchers have catalyzed new collaborations, presented results at top international conferences, secured new funding, created teaching modules, and built software used by many other research groups.
All of this, paired with the scale and breadth of working demonstrations, shows how much is now possible and promises to draw more people in to build the next set.</p><p style="text-align: justify;">This diffusion also requires building researcher expertise and familiarity. The hackathon events showed that with even a modest incentive structure and a focused community-building effort, thousands of researchers with minimal initial LLM experience can be trained via &#8220;learn-by-shipping&#8221;. In this model each team produces (1) a concept, (2) a software repository, (3) a demo artifact (e.g., a video), and (4) a short explainer. Those outputs, and the community itself, act as a diffusion substrate: code others can build upon, workflows others can copy, and explorations of spaces relevant to many different research groups. As such, the training does not stay local to participants. This creates a virtuous cycle in which increased visibility recruits the next cohort, reuse of the examples turns prototypes into shared resources or production software, and normalization lowers perceived risk and increases adoption inside labs.</p><p style="text-align: justify;">As these tools propagate, and the ability to run and interpret common techniques like XRD analysis, DFT, and MD, or to leverage unique user facilities, becomes conversational and ubiquitous, the barriers between disciplines become more porous: AI tools reduce transaction costs, meaning less time negotiating data formats, learning one-off software stacks, or translating jargon. Further, as data management overhead shrinks, researchers reclaim time for actual science, and other research teams get access to better open data for their analyses. For training the next generation, students can practice on simulated characterization techniques, expanding researcher capacity dramatically. Breakthrough science has always thrived at the intersections between fields, institutions, capabilities, and expertise. In many ways, these projects are helping to create new intersections, and science is set to change dramatically as user expertise in AI tools diffuses and models improve, creating more of those intersections.</p><p style="text-align: justify;">None of these tools or approaches is guaranteed to become widely adopted. The new technology creates an opening, but realizing the benefits depends on whether the research community, and the institutions and agencies that support the research ecosystem, lean into the opportunity. The experiences documented here suggest that relatively modest investments in inference infrastructure (or cloud credits), scientific middleware, documentation standards, and community coordination could yield outsized returns.</p><p style="text-align: justify;">If you are an interested scientist, you can join the 1,400+ researchers continuing to build in this space on the <a href="https://llmhackathon.github.io">hackathon Slack</a>. Reach out to me if you&#8217;d like to sponsor this work or visit our community, and stay tuned for our next events!</p>
]]></content:encoded></item><item><title><![CDATA[AI Diffusion into the Feed]]></title><description><![CDATA[AI-powered Chinese Mini-Dramas and the Attention Economy]]></description><link>https://www.diffuseai.pub/p/ai-diffusion-into-the-feed</link><guid isPermaLink="false">https://www.diffuseai.pub/p/ai-diffusion-into-the-feed</guid><dc:creator><![CDATA[Grace Shao]]></dc:creator><pubDate>Tue, 14 Apr 2026 13:20:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Do3b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a2f5f4f-d712-4077-9380-84c258b616fb_1600x1016.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Over the Lunar New Year break, ByteDance, the parent company of TikTok, released Seedance 2.0, its latest text-to-image and text-to-video model and app. Its debut was so strong it <a href="https://www.bbc.com/news/articles/ckg1dl410q9o">sent shockwaves across Hollywood</a> and sparked <a href="https://www.nytimes.com/2026/02/16/movies/tom-cruise-brad-pitt-artificial-intelligence-seedance.html">deep anxiety</a> over the fast-evolving capabilities of AI. Soon after its release, movie studios such as <a href="https://edition.cnn.com/2026/02/20/china/china-ai-seedance-intl-hnk-dst">Paramount and Disney sent angry letters to ByteDance</a> over copyright infringement concerns.</p><p>Although the primary commercial potential of these new Chinese models is not yet in major big-screen production, as Hollywood most fears, they are being widely adopted by a new media channel that China dominates: microdramas.</p><p><a href="https://www.technologyreview.com/2024/02/27/1088980/chinese-short-drama-tiktok-flextv/">MIT Technology Review</a> reported last year on the rise of microdrama exports from China. Microdramas are smartphone-native, have a fast turnaround, and compete directly for viewers&#8217; attention and time.
They are not &#8220;shorter TV shows&#8221;; they are a new vertical of their own.</p><p>These microdramas exploded in part due to timing: the phone became the living room, distribution became adept at buying attention, and production pipelines learned to optimize for retention at scale.</p><p>And people don&#8217;t deliberate over what to watch with their thumbs at 11:30 pm; they turn on whatever is frictionless and emotionally legible in a phone-sized frame.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Do3b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a2f5f4f-d712-4077-9380-84c258b616fb_1600x1016.png" width="1456" height="925" alt=""></figure></div>
<p>According to DataEye, a Chinese media analytics firm, Chinese microdrama apps were downloaded over 300 million times globally in the first half of 2025, pushing YouTube and Netflix off the top of app store rankings. Some of the most notable Chinese microdrama platforms include ReelShort, DramaBox, GoodShort, and ShortMax. Most of the parent companies behind these platforms produce content in both Chinese and English, distributing across different social media platforms aimed at different audience demographics, but platforms like ReelShort and ShortMax focus on the English-native market.</p><p>Like the American short-form native platform Quibi, the leading Chinese microdrama company ReelShort targets mobile-first, short-form, vertical-video entertainment, but their strategies differ sharply.
While Quibi failed with high-budget, A-list content, ReelShort thrived on ultra-low-budget, addictive, soapy, &#8220;guilty pleasure&#8221; dramas.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!foDT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9733a1f2-cff0-420c-8de6-f91b8487772b_1600x801.png" width="1456" height="729" alt=""></figure></div>
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>For the sake of research, I watched a few episodes and realized its addictive nature. The themes read like a speedrun of narrative dopamine: car crash amnesia, comeback revenge arcs, double lives, fake marriages, cheating spouses. Honestly, the more dramatic and emotionally triggering, the more engaging. They&#8217;re like the less polished, more spiteful cousins of K-drama tropes, and they remind me of the telenovelas that used to play in the background at my friend&#8217;s house. Each episode was a wild ride; it gave you dopamine, cortisol, oxytocin, adrenaline, all in 120 seconds of your life. Each episode feels short, but follow along for a whole storyline, and you&#8217;ve burned through an hour without noticing. Reelshort&#8217;s slogan &#8220;every second is drama&#8221; is quite literal in that time is attention and attention is the business.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.diffuseai.pub/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Diffuse AI! Subscribe to get future pieces in your inbox.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>And just when you&#8217;re super amped up and hooked, it displays a black screen to unlock the next episode, pay a few coins, or watch an ad. Micro-dramas are not just shorter TV series. The content business now looks more like the economics of gaming. 
<h2>The Business of Microdramas</h2><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tRUx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce793802-99e7-4e42-8ee9-c081e9ace8d2_1600x900.png" width="1456" height="819" alt=""></figure></div>
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Microdramas are both an advertising business and a subscription business. And it isn&#8217;t a small business. <a href="https://m.thepaper.cn/newsDetail_forward_32417505">DataEye estimates </a>that China&#8217;s microdramas + &#8220;manju&#8221; (AI/animated short dramas) generated over RMB 100 billion in annual output value&#8212;well above the earlier market expectation of ~RMB 60 billion. At that scale, the microdrama industry is 2&#215; China&#8217;s domestic film box office annual revenue of RMB 51.832 billion.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qFVC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qFVC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 424w, https://substackcdn.com/image/fetch/$s_!qFVC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 848w, https://substackcdn.com/image/fetch/$s_!qFVC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 1272w, https://substackcdn.com/image/fetch/$s_!qFVC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qFVC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png" width="919" height="467" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:467,&quot;width&quot;:919,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qFVC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 424w, https://substackcdn.com/image/fetch/$s_!qFVC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 848w, https://substackcdn.com/image/fetch/$s_!qFVC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 1272w, https://substackcdn.com/image/fetch/$s_!qFVC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70af8947-4fb6-4c59-8983-796d97209bbd_919x467.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Chart: <a href="https://www.hollywoodreporter.com/business/business-news/microdrama-series-verticals-production-1236418912/">The Hollywood Reporter</a></p><p>According to a Chinese business publication, The Paper&#8217;s report, Chinese micro-dramas have reached 200+ countries and regions. In 2025, Guangda Securities estimates that the overseas market generated ~US$2.38B in revenue on 1.21B downloads, with revenue growth accelerating to +263% year-on-year and download growth to +135% year-on-year. 
As of now, North America remains the main profit pool, contributing over 60% of overseas revenue for these companies, while Southeast Asia leads in download share at ~35%; the Middle East, Japan, Korea, and Europe are also cited as expanding rapidly.</p><p>Among the top platforms by revenue are ReelShort, DramaBox, and GoodShort; together, the three account for over 53% of the market.</p><h2>Embracing AI</h2><p>Today, companies like ReelShort still mostly rely on Western actors when making content for Western audiences. But the next step, which many have told investors they plan to take, is to use AI to create characters to perform these scripts, significantly reducing production and export costs and reshaping the economics of microdramas.</p><p>The most recent generation of Chinese video models certainly seems capable of making this leap. The internet has already been thrown into a frenzy as snippets of <a href="https://x.com/zhao_dashuai/status/2020528048341217592">short</a> period dramas leaked ahead of Seedance&#8217;s latest update announcements, along with new <a href="https://x.com/pjaccetturo/status/2019072637192843463?s=46&amp;t=U77RY0EbcBG0KvZxuDR5hw">action films</a> created entirely on Kling. <a href="https://x.com/kenw_2/status/2018987162365010215?s=46&amp;t=U77RY0EbcBG0KvZxuDR5hw">Tutorials on how to turn written scripts into videos</a> using these tools have flooded X. Hugging Face&#8217;s head of the APAC ecosystem, Tiezhen Wang, has been <a href="https://x.com/Xianbao_QIAN/status/2021356619624481039?s=20">sharing videos</a> made with Seedance 2.0 that resemble scenes from a Hollywood blockbuster. There was even a dedicated AI Film Festival hosted in India by <a href="https://x.com/beginnersblog1/status/2020099935572795505?s=46&amp;t=U77RY0EbcBG0KvZxuDR5hw">InVideo</a> as part of the broader India AI Impact Summit, with PM Modi, President Macron, Jensen Huang, Sam Altman, and Sundar Pichai in attendance. Of course, AI still produces the occasional horrendous glitch. But some argue that whether the protagonist wears the same t-shirt in two consecutive scenes doesn&#8217;t affect the experience of a one-minute short watched on a 1080p phone screen.</p><p>And this is no longer just hypothetical. In mid-March, <a href="https://finance.sina.com.cn/tob/2026-03-19/doc-inhrpfxk4031114.shtml">ByteDance announced the release of &#23567;&#20113;&#38592; (&#8220;little sparrow&#8221;)</a>, an AI-native drama platform.
It accepts scripts of up to 100,000 characters and is described as the industry&#8217;s first agent powered by Seedance 2.0. <a href="https://www.cls.cn/detail/2286046">Industry leaders credit Seedance 2.0 with solving three long-standing pain points for AI short drama:</a> character and scene consistency, realism in complex physical motion, and continuity and rationality of camera movement.</p><p>Chinese filmmaker Gong Changhu told the Chinese tech outlet <em>Jiemian</em> that before Seedance 2.0, his 10-person team could produce a 120-minute short drama in 20 days; after Seedance 2.0, that time dropped by half, a sign of how <a href="https://m.jiemian.com/article/14183430.html">such AI models could materially raise production efficiency and trigger explosive industry growth.</a></p><p>A recent Kuaishou Kling press release stated that, since its launch in June 2024, its video generation models have served 60 million creators worldwide and produced over 600 million pieces of content. The release frames Kling 3.0 as a shift from a &#8220;generation tool&#8221; to an &#8220;intelligent creative partner&#8221; that can grasp artistic intent and turn ideas into reality, effectively positioning it as the start of an era in which anyone can turn ideas into films. AI is being sold less as a tool and more as something that takes intent and executes, a true partner in production.</p><h2>What will AI-empowered microdramas look like?</h2><p>Microdrama exporters continue to rapidly capture traffic, scale, and distribution, both in China and abroad. The &#8220;second half&#8221; of the battle is the efficiency of AI adoption and the ecosystem flywheel effect. For companies like ReelShort, with a huge existing MAU base, the thinking is that once they can cut production costs, they can push cheaply made content through existing distribution and monetize.</p><p>The likely business model for AI-empowered microdramas will be built on a legacy supply chain: content ownership, character and language localization, lowered expectations, and AI-driven cost and efficiency gains across the whole stack.</p><p>Let&#8217;s break down the layers of the ecosystem:</p><p><strong>Layer 1: scale of writers.</strong> The massive popularity of web novels in China since the early 2000s has created a generation of writers trained to maximize retention at all costs: cliffhangers, rapid reversals, and emotional triggers engineered to drive binge behavior. Web novels, like microdramas, typically lure readers in with a few free chapters before paywalling subsequent releases. Also like microdramas, the market is brutally competitive, with hundreds of thousands of active novels vying for readers at any given time. And the universally loved-but-loathed themes are the same: Fifty Shades of Grey&#8211;style CEO-turned-boyfriend, rags to riches, and Cinderella tales.</p><p>The Chinese digital literature ecosystem is a large commercial market; <strong>the latest official reports estimated ~575 million online literature readers as of 2024 and industry revenues in the tens<a href="https://english.news.cn/20250401/528de0a09d4948de86df7b9787c6ca79/c.html?utm_source=chatgpt.com"> of billions of RMB</a>,</strong> and the space encompasses hundreds of thousands of writers.
And that huge group of online novel writers has been <a href="https://www.wenweipo.com/a/202602/02/AP697fb227e4b04d7d56d15d14.html">pivoting in droves to micro-drama screenwriting since 2023</a>, as the vertical offers better pay.</p><p><strong>Layer 2: IP owners and product creators. </strong>The second layer of the supply chain comprises IP owners and product creators. Companies like ReelShort aren&#8217;t just microdrama producers; they are part of a larger ecosystem. The studio behind ReelShort is Crazy Maple Studio, which is 49% owned by COL Group (&#20013;&#25991;&#22312;&#32447;), a Chinese content company with an enormous inventory of stories ready to be adapted. They work with writers in two ways: as in-house employees and by licensing or buying out web-novel IPs.</p><p>For context, <a href="https://www.col.com/">COL was founded in 2000 and is listed on the Shenzhen Stock Exchange</a> (stock code 300364). It <strong>is said to hold more than 5.6 million pieces of digital content and more than 4.5 million digital-native authors.</strong> Its products include online books, movies, audiobooks, music, and more. The sheer scale of its IP ownership is an advantage in its own right.</p><p><strong>Layer 3: tools. </strong>The third layer comprises the tools. Chinese short-video platforms have positioned themselves to capture this market with in-house models such as ByteDance&#8217;s Seedance and Kuaishou&#8217;s Kling. Both platforms have massive monthly user bases, and the vast stores of video and image data they hold for training have given them an edge: their open-source/open-weight text-to-video models have become competitive with Western industry leaders such as Veo, Sora, and Midjourney. Not only that, but tools such as Kling and Seedance are largely free for short-form video creation.</p><p>Kuaishou&#8217;s Kling is targeting independent filmmakers and indie studios, aiming to empower them with AI, and has been partnering with film festivals across Asia, from Hong Kong to Tokyo. ByteDance&#8217;s video generator is widely available across its AI apps, such as Doubao, Jimeng, and Jianying/CapCut, all at no cost.</p><p><strong>Layer 4: distribution. </strong>The final layer is distribution. Although companies like ReelShort are still largely loss-making, their goal is to blitzscale distribution first and then move to profitability. On the first goal, they are doing tremendously well: ReelShort is one of the top apps in the App Store entertainment category, directly competing with top-tier streaming apps for downloads. The logic is to own distribution, reduce production costs through AI video generation, and then reach profitability.</p><p>Quibi demonstrated that cramming prestige-TV Hollywood economics into short-form doesn&#8217;t work &#8212; ReelShort&#8217;s bet is the inverse: own distribution, drive production costs toward zero with AI, and let one breakout hit cover the rest.</p><p>An article in the Hong Kong newspaper Ta Kung Pao offered a case study that reads like a DTC marketer&#8217;s dream: a micro-drama titled The Divorced Billionaire Heiress, reportedly costing under $200k and generating $35 million at the North American box office &#8212; roughly 170&#215; returns. Even if you treat that as an extreme example rather than the median, it explains why capital keeps wandering into this &#8220;unsexy&#8221; corner of entertainment.</p>
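<p>Taking the reported numbers at face value, the multiple is easy to reproduce; the figures below are the article&#8217;s, the arithmetic is mine:</p><pre><code class="language-python"># Reproducing the return multiple from the Ta Kung Pao case study.
production_cost_usd = 200_000  # reported as "under $200k"
na_gross_usd = 35_000_000      # reported North American gross

print(f"return multiple: {na_gross_usd / production_cost_usd:.0f}x")
# 175x at exactly $200k of cost; the article's ~170x is the same ballpark
</code></pre>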
<p>With AI video generation lowering per-unit production costs, Chinese microdrama platforms are well positioned to push further into Western markets.</p><h2><strong>AI Micro-dramas, a New Soft Power</strong></h2><p>Television reshaped the movie industry: stories had to move from big screens to the little boxes in our homes, and with the new medium came the rise of the TV series, which challenged the two-hour movie formula. The internet and the rise of streaming changed how we consume those series. Now, microdramas are a new business model, natively built for the mobile era. While still niche, they represent a growing source of competition with mainstream streaming services for eyeballs.</p><p>While Western studios are still worried about AI taking actors&#8217; or writers&#8217; jobs, Chinese studios are building microdrama platforms uniquely positioned to leverage AI tools to capture user attention in the global attention economy. AI-powered microdramas represent a uniquely Chinese vertical, from AI models to distribution platforms, and potentially a new form of soft-power export, following in the footsteps of Labubu.</p>]]></content:encoded></item><item><title><![CDATA[Diffuse AI: Issue 1]]></title><description><![CDATA[AI-powered microdramas, the bipartisan imperative for AI diffusion, and more]]></description><link>https://www.diffuseai.pub/p/diffuse-ai-issue-1</link><guid isPermaLink="false">https://www.diffuseai.pub/p/diffuse-ai-issue-1</guid><dc:creator><![CDATA[Charles Yang]]></dc:creator><pubDate>Tue, 07 Apr 2026 15:15:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4db229be-9789-4d3a-a23e-272579755065_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Friends,</p><p>We are excited to share that Issue 1 of Diffuse AI is now coming to an inbox near you.
For our inaugural issue, we&#8217;re excited to have the following contributors:</p><ul><li><p>Grace Shao &#8211; Diffusion into the Feed: Mini-Dramas, AI-Native Entertainment, and the Attention Economy</p></li><li><p>Ben Blaiszik &#8211; Field notes from LLM hackathons for chemistry and materials</p></li><li><p>Lesley Gao &#8211; Why AI hasn&#8217;t reached the manufacturing floor</p></li><li><p>Sean A. Harrington &#8211; The Structural Barriers to AI Lawyers</p></li><li><p>Dean W. Ball and Nik Marda &#8211; On the National Imperative for AI Diffusion, a joint interview with Charles Yang</p></li><li><p>Anonymous &#8211; Clara Collier interviews a macro analyst on the impacts of AI diffusion for India&#8217;s economic development</p></li></ul><p>We aim to ship published pieces every other week, alongside monthly round-ups of blogs, pieces, and tweets on AI diffusion. Send us your best takes!</p><p>And as always, we are searching for in-depth, on-the-ground pieces about how AI is diffusing into and shaping different parts of the economy. Pitch us through <a href="https://docs.google.com/forms/d/e/1FAIpQLSeVCkaczhPxvzXw84m5gbrpL3Q7LnklRStCfOPMgmNSWV72sw/viewform?usp=dialog">this form</a>.
We pay $1k USD for essays and reportage.</p>]]></content:encoded></item><item><title><![CDATA[Announcing Diffuse AI]]></title><description><![CDATA[And a Call for Contributors]]></description><link>https://www.diffuseai.pub/p/announcing-diffuse-ai</link><guid isPermaLink="false">https://www.diffuseai.pub/p/announcing-diffuse-ai</guid><dc:creator><![CDATA[Charles Yang]]></dc:creator><pubDate>Tue, 09 Dec 2025 14:31:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ff3eadc4-879d-42a5-bc1c-cb8a7351a8ab_3873x2619.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI is eating the US economy. It&#8217;s a bubble. It&#8217;s normal technology. It&#8217;s going to be the biggest thing since the internet, or electricity, or fire. It&#8217;s slop. It&#8217;s God. It&#8217;s plateauing. It&#8217;s going to replace us all.</p><p>Everyone wants to know what the next few years of AI will look like. We have a different question: what is AI capable of right now?</p><p>Whether you&#8217;re worried about unemployment, deskilling, or human extinction &#8211; and even if you think AI is a flash in the pan &#8211; we all have a shared interest in understanding how it is already changing our world. But static benchmarks &#8212; the multiple-choice questions or verifiable math and coding challenges typically used to evaluate models &#8212; are getting saturated as quickly as we can build them. The real test is how well AIs function in complex, context-heavy, dynamic environments &#8212; in other words, the real world.</p><p>What do we know so far? We&#8217;re starting to see serious interest in how AI is playing out across the economy, but the picture is messy. We&#8217;ve all seen the buzzy paper from <a href="https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf">Stanford arguing that AI is currently causing jobs to plummet among junior workers in &#8220;highly exposed industries&#8221;</a> and the <a href="https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs">equally buzzy paper from Yale claiming there&#8217;s no discernible effect</a>. Labs are investing in more sophisticated measures of how their models perform on the kinds of tasks that matter for real jobs, but there&#8217;s a big difference between a self-contained evaluation and the messy reality of day-to-day employment. And for every report on the wonders of vibe-coding, there&#8217;s a thread on Hacker News insisting that AI productivity gains are a mirage. For every $30 million AI-for-science startup, there&#8217;s a grizzled computational biologist who&#8217;s ready to deflate the hype. What&#8217;s really going on? What are these systems actually capable of?
What are the bottlenecks to realizing AI-assisted productivity gains? What does this all look like on the ground?</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!aIlJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b62ae7b-83bf-46a1-a2d5-edb3012089bf_4329x1329.png" alt=""></figure></div>
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Diffuse AI is going to help answer all these questions, in as much nitty gritty qualitative detail as possible. We want:</p><ul><li><p>In-depth case studies of how AI is playing out in your industry &#8212; the more specific, the better.</p></li><li><p>Interviews with experts about what current models are and aren&#8217;t useful for in their work.</p></li><li><p>Stories of similar instances of historical tech diffusion.</p></li><li><p>Thoughtful discussions of the strengths, weaknesses, and methodologies of economic impact benchmarks like <a href="https://openai.com/index/gdpval/">GDPeval</a> or the <a href="https://www.anthropic.com/economic-index">Anthropic Economic Index</a>.</p></li></ul><p>We don&#8217;t want:</p><ul><li><p>Predicting the future.</p></li><li><p>Theorizing from first principles.</p></li><li><p>Coming with an axe to grind.</p></li></ul><p>Here are some examples of pieces we&#8217;d love to have published:</p><ul><li><p><a href="https://www.worksinprogress.news/p/why-ai-isnt-replacing-radiologists">Why AI isn&#8217;t replacing radiologists</a></p></li><li><p><a href="https://www.newsroomrobots.com/p/how-a-five-person-ai-team-is-powering">An interview with the AI Initiatives team at The New York Times</a></p></li><li><p><a href="https://secondthoughts.ai/p/first-they-came-for-the-software">25 interviews with software engineers on how they use AI</a></p></li><li><p>How Deepseek is actually diffusing in China: <a href="https://chinai.substack.com/p/chinai-321-deepseek-spreads-across?utm_source=post-email-title&amp;publication_id=2660&amp;post_id=168739110&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=75jj6&amp;triedRedirect=true&amp;utm_medium=email">Shallow, Narrow, and Slow</a></p></li><li><p>A case study on the <a href="https://arxiv.org/abs/2506.21816">first Compute Arms Race: weather forecasting supercomputer</a></p></li></ul><p>Pitch us through <a href="https://docs.google.com/forms/d/e/1FAIpQLSeVCkaczhPxvzXw84m5gbrpL3Q7LnklRStCfOPMgmNSWV72sw/viewform?usp=dialog">this form</a>, we pay $1k USD for essays and reportage. 
Help us figure out what&#8217;s really going on.</p>]]></content:encoded></item></channel></rss>