Tech Sharing:
Context, Constraints & Creativity: Rethinking Human-AI Workflows
In an era where AI is increasingly embedded in how we think, work, and create, the way we collaborate with machines is evolving. This session explores how context, constraints, and creativity come together to shape the future of human-AI workflows.
Shaohuan Li (Shao)
July 2025, Singapore
Outline
  1. About me
  2. How LLMs work
  3. How reasoning models work
  4. How the human brain reasons
  5. The Current Stage of AI
  6. Limitations of AI
  7. Context Engineering and AI agents
  8. Human + AI: Solving Complexity Together
  9. Real-World Example
About Me
Shaohuan Li (Shao)
李绍欢
LinkedIn
Tech Entrepreneur & Product Strategist
Building innovative solutions at the intersection of artificial intelligence and business applications
Founder Experience
Created and scaled Fooyo, focusing on enterprise solutions and data platforms.
AI Specialization
Dedicated to developing human-in-the-loop AI systems that enhance rather than replace human capabilities
Education:
NUS (National University of Singapore) Computer Engineering (2013) - Specialised in data mining.
KTH (Royal Institute of Technology, Sweden) Data Science (2012) - Machine learning / natural language processing. Music data mining.
Working Experiences:
ViSenze (2014) Software Engineer - AI image-processing SaaS
Fooyo (2015-now) Founder and CEO - Data platforms, smart tourism, SaaS, AI data solutions. Helped 50+ companies build data platforms and generate insights from them.
How Large Language Models (LLMs) Work
1. Supervised Fine-Tuning (SFT) - train the base model on human-written demonstrations
2. Reward Model Creation - learn a scoring model from human rankings of candidate outputs
3. Reinforcement Learning (with PPO) - optimize the model against the reward model (sketched below)
References: https://openai.com/index/chatgpt/
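A minimal, schematic sketch of these three stages, using toy stand-in classes; ToyModel and ToyRewardModel below are illustrative placeholders, not a real training framework:

# A simplified, illustrative sketch of the three RLHF stages (not a real training pipeline).
class ToyModel:
    """Stand-in for an LLM: remembers the answers it has been nudged toward."""
    def __init__(self):
        self.memory = {}

    def update(self, prompt, target):
        # Stage 1 "gradient step": move the model toward the human demonstration
        self.memory[prompt] = target

    def generate(self, prompt):
        return self.memory.get(prompt, "draft answer for: " + prompt)


class ToyRewardModel:
    """Stand-in for a reward model trained on human preference rankings."""
    def __init__(self):
        self.preferred = {}

    def fit_preference(self, prompt, ranked_answers):
        self.preferred[prompt] = ranked_answers[0]  # top-ranked answer by human labelers

    def score(self, prompt, answer):
        return 1.0 if answer == self.preferred.get(prompt) else 0.0


# Stage 1: supervised fine-tuning on human-written demonstrations
model = ToyModel()
demos = [("Explain RLHF briefly.", "RLHF aligns a model with human preferences.")]
for prompt, ideal in demos:
    model.update(prompt, ideal)

# Stage 2: reward model learned from human rankings of candidate answers
reward_model = ToyRewardModel()
reward_model.fit_preference("Explain RLHF briefly.",
                            ["RLHF aligns a model with human preferences.", "RLHF is magic."])

# Stage 3: reinforcement learning (PPO in practice) against the reward model's score
for prompt, _ in demos:
    answer = model.generate(prompt)
    print(prompt, "->", answer, "| reward:", reward_model.score(prompt, answer))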
How Reasoning Models Work
Chain-of-Thought (CoT)
Prompting a model to produce step-by-step reasoning paths
  • Breaks down multi-step questions into intermediate logical steps
  • Enables models to solve complex problems that direct prompting alone could not (example below)
  • Benefits emerge in sufficiently large models (100B+ parameters)
References: Jason Wei and Denny Zhou, Research Scientists, Google Research, Brain team https://research.google/blog/language-models-perform-reasoning-via-chain-of-thought/
Stanford CS25: V5 I Large Language Model Reasoning, Denny Zhou of Google Deepmind https://www.youtube.com/watch?app=desktop&v=ebnX5Ur1hBk
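A minimal sketch of what chain-of-thought prompting looks like in practice; the question and the worked exemplar below are invented for illustration and could be sent to any chat-style LLM API:

# Minimal illustration of chain-of-thought prompting: the same question asked
# directly vs. with a worked, step-by-step exemplar.
question = ("A cafe sells 14 drinks per hour and is open 9 hours. "
            "Staff drink 6 of them. How many are sold to customers?")

direct_prompt = f"Q: {question}\nA:"

cot_prompt = (
    "Q: A shop has 3 boxes with 12 apples each and gives away 5. How many apples remain?\n"
    "A: Let's think step by step. 3 boxes x 12 apples = 36 apples. 36 - 5 = 31. The answer is 31.\n\n"
    f"Q: {question}\n"
    "A: Let's think step by step."
)

print("--- Direct prompt ---\n" + direct_prompt)
print("\n--- Chain-of-thought prompt ---\n" + cot_prompt)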
How the Human Brain Reasons – Logic, Bias, and Context
1. Types of Reasoning
  • Deductive: From general premises to certain conclusions
  • Inductive: Generalizing from specific examples to probable conclusions
  • Abductive: Inference to the best explanation
Effective human reasoning employs a mix of these approaches, combining strict logical steps with probabilistic judgment.
2. Context Dependency
Human reasoning is heavily context-dependent rather than purely logical.
  • What counts as "convincing" depends on context and accepted premises
  • Arguments that are sound in one context may not be in another
  • We continuously adjust to background knowledge, audience, and situation
3. Cognitive Biases
Systematic effects on human reasoning:
  • Confirmation bias: favoring information that confirms existing beliefs
  • We are more critical of others' arguments than our own
  • Deliberative thinking is energy-intensive for the brain
  • We often default to easier automatic thinking
How the Human Brain Reasons
System 1 vs. System 2 Thinking
Neuroscientifically, deliberate reasoning activates prefrontal brain circuits and consumes significant energy, while intuitive judgments rely on quicker, low-effort pathways.
  • System 1: Fast, automatic, intuitive thinking
  • System 2: Slow, effortful, deliberate thinking
  • Critical thinking requires engaging System 2 deliberately
  • Important to surface assumptions and check biases
How the Human Brain Reasons – System 1 and System 2
System 1: Fast Thinking
  • Automatic and intuitive
  • Includes instincts, habits, gut reactions
  • Operates with little conscious effort
  • Low energy cost
  • Uses heuristic shortcuts
Brain Energy Use
  • The brain is ~2% of body weight
  • Yet it consumes ~20% of the body's energy
  • Deliberative thinking is highly energy-demanding
  • Mental fatigue is real
System 2: Slow Thinking
  • Effortful and deliberate
  • Used for complex problems and logical analysis
  • Requires concentration
  • Resource-intensive and relatively slow
  • Involves prefrontal cortex
The Balancing Act
  • Default to System 1 to conserve energy
  • Engage System 2 to override gut instincts when needed
  • Critical thinking requires surfacing assumptions
  • Good decision-makers know when to trust intuition vs. analysis
The Current Stage of AI – "The Era of Human Data"
Learning from Human Knowledge
AI has advanced by learning from human-generated data:
  • LLMs trained on text produced by humans
  • Reinforcement learning often uses human feedback
  • "The AI of the 2020s – learning from human data – is doing very well" - Richard Sutton
Fundamental Constraints
Training on static human datasets has drawbacks:
  • Data becomes outdated quickly
  • Encoded biases are learned and amplified
  • Quality human data is finite and costly
  • AI is capped by human knowledge
Approaching Data Saturation
We're hitting the limits of scaling up AI on internet-based human data:
  • By 2026-2030, we may exhaust all high-quality public text data
  • By 2028, the largest models' training sets may equal the entire stock of public internet text
  • Diminishing returns on model and data size increases
These limitations motivate the next stage of AI development, moving from the Era of Human Data to what some call the Era of Experience.
References: Professor Richard Sutton: the future of AI and reinforcement learning: https://www.youtube.com/watch?v=f9KDMFZqu_Y
Limitations of AI and the Need for Experience-Based Learning
Data Saturation Problem
Models like GPT-4 have been trained on a significant fraction of everything humanity has written online.
  • Adding more of the same data yields diminishing returns
  • By late 2020s, AI training may consume virtually all available public text
  • Models trained on static data struggle with knowledge cutoff
  • They inherit biases and errors from training corpora
The Experiential Learning Thesis
"Future AI needs a new source of data that grows and improves as the agent becomes stronger" - Rich Sutton
  • AI agents should learn by interacting with the world
  • Collect new data through their own actions
  • Continuously update understanding and correct mistakes
  • Learn through sensors or simulations in a goal-directed way
The field is pivoting toward approaches where AI learns from interactive experience to overcome data saturation and achieve more robust, adaptable intelligence.
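As a toy illustration of learning from experience rather than from a static dataset, the sketch below shows an agent that improves its own value estimates purely by acting and observing rewards; the 3-armed bandit environment is hypothetical and stands in for any interactive setting:

# A minimal sketch of experience-based learning: the agent generates its own data
# by acting, observing rewards, and updating its estimates (no human dataset needed).
import random

true_payoffs = [0.2, 0.5, 0.8]   # hidden reward probabilities of three possible actions
estimates = [0.0, 0.0, 0.0]      # the agent's learned value of each action
counts = [0, 0, 0]

for step in range(1000):
    # Explore occasionally, otherwise exploit the current best estimate
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])

    # Acting produces fresh feedback that no static corpus could have provided
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0

    # Incremental update: the estimate moves toward the observed reward
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("Learned action values:", [round(v, 2) for v in estimates])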
Context Engineering and AI Agents
What is Context Engineering?
"The discipline of designing and building dynamic systems that provide the right information and tools, in the right format, at the right time, to give an LLM everything it needs to accomplish a task." - Phil Schmid
  • Carefully designing what information and tools an AI has access to
  • Curating the AI's "working context" to optimize performance
  • Building a contextual environment beyond simple prompting
Why It Matters
AI agents have limited "attention span" (context window)
  • Success often hinges on what's in that window
  • Many agent failures are context failures, not model failures
  • Rich context enables more useful and personalized responses
Context Components (assembled in the sketch below)
  • System prompt with role instructions
  • Conversation history (memory)
  • Retrieved documents or knowledge base info
  • Tool or API definitions
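A minimal sketch of how these components might be assembled into a single request payload; the documents, memory, and tool definition are hypothetical, and the role/content message format follows the common chat-completion convention rather than any specific vendor's API:

# A sketch of context engineering: gathering the pieces an agent needs into one payload.
system_prompt = (
    "You are a go-to-market research assistant. "
    "Answer using the retrieved documents; call a tool when data is missing."
)

conversation_memory = [
    {"role": "user", "content": "We plan to launch our SaaS product in Singapore."},
    {"role": "assistant", "content": "Noted. I will focus my research on the Singapore market."},
]

retrieved_documents = [
    "Doc 1: Singapore enterprise buyers weigh vendor certifications heavily.",
    "Doc 2: Typical SaaS pricing in SEA is 20-40% below US list price.",
]

tool_definitions = [
    {"name": "search_market_reports", "description": "Look up market-size figures by country."},
]

# The engineered context: everything the model will see inside its limited window
messages = (
    [{"role": "system", "content": system_prompt}]
    + conversation_memory
    + [{"role": "system", "content": "Retrieved context:\n" + "\n".join(retrieved_documents)}]
    + [{"role": "user", "content": "What should our pricing model be?"}]
)

for m in messages:
    print(f"[{m['role']}] {m['content'][:80]}")
# 'messages' plus 'tool_definitions' would then be passed to the model API of your choice.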
Human + AI: Solving Complexity Together
Complex Problems
Business challenges require nuanced context understanding, emotional intelligence, and logical reasoning
AI Strengths
Computational scale, processing speed, and consistent structural frameworks
Human Strengths
Intent clarification, ethical judgment, and high-level conceptual abstraction
Future Workflow
Integrated systems that combine AI reasoning engines with expert human guidance, achieving results neither could reach alone
Real-World Example: GTM Strategy for a Chinese Product to Southeast Asia
Complex business challenges require both structured reasoning and market intuition
Objective
Launching a Chinese software product into SEA (e.g., Singapore, Malaysia, Indonesia)
Key Questions
  • Who are our early adopters in SEA?
  • What pricing model and localization are needed?
  • Which channels work best: B2B outreach, platform listings, or distributor partners?
Audience Thinking
  • 🇸🇬 Singapore buyers: “Is this product trusted and enterprise-ready?”
  • 🇮🇩 Indonesian buyers: “Does it solve my local workflow?”
  • Chinese product team: “How do we adapt without losing core differentiation?”
  • Distributors/investors: “What is the local GTM cost vs. revenue?”
Real-World Example – Pitch for Market Entry Strategy
Human + AI Pitch Process
Research (AI-driven)
Intelligent data gathering from market sources and company materials
Insight Generation (AI-driven)
Pattern recognition and narrative development from complex information
Slide Creation (AI-driven)
Automated visualization and content structuring based on insights
Feedback Loop (Human-in-the-loop)
Expert refinement and strategic direction
Persona-based Framing (AI-assisted)
Tailoring narratives to resonate with specific audiences
This example demonstrates how a reasoning AI agent can amplify human capabilities, while humans keep the AI on track and ensure the final output has a coherent vision and a persuasive narrative, as sketched below.
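A rough sketch of this workflow as a human-in-the-loop pipeline; every function below is a hypothetical placeholder standing in for an AI-driven step, and human_review marks where expert judgment accepts the draft or sends it back:

# A sketch of the pitch workflow: AI-driven steps with a human review gate.
def research(market):
    return f"Notes on {market}: competitor list, pricing benchmarks, channel landscape."

def generate_insights(notes):
    return f"Insight: early adopters are mid-size enterprises. (derived from: {notes[:40]}...)"

def draft_slides(insights):
    return [f"Slide 1 - {insights}", "Slide 2 - GTM channels", "Slide 3 - pricing"]

def reframe_for_persona(slides, persona):
    return [f"[{persona}] {s}" for s in slides]

def human_review(slides):
    # Stand-in for expert judgment: for example, flag a deck with no pricing story
    needs_revision = not any("pricing" in s.lower() for s in slides)
    feedback = "Add a pricing slide." if needs_revision else "Approved."
    return needs_revision, feedback

# AI-driven steps
notes = research("Singapore")
insights = generate_insights(notes)
slides = draft_slides(insights)

# Human-in-the-loop feedback gate before persona-based framing
needs_revision, feedback = human_review(slides)
print("Reviewer:", feedback)
if not needs_revision:
    final = reframe_for_persona(slides, "Singapore enterprise buyer")
    print("\n".join(final))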
Thank You!
Let's build better reasoning together.
Connect with me to explore partnership opportunities and join the journey toward a new era of human-AI collaboration.
Email: shaohuan.li@gmail.com | LinkedIn: Shaohuan Li
References
AI Models & Reasoning
  • IBM AI Knowledge Base on GPT models
  • Google Research on Chain-of-Thought
  • IBM documentation on ReAct Agents
  • Yao et al. on Tree-of-Thoughts framework
  • Villalobos et al. (2023) on data scaling limits
Human Reasoning
  • "Critical Thinking: A Brain-Based Guide for the ChatGPT Era" by Oakley, Sejnowski, and Trybus
  • Kahneman's work on System 1 and System 2 thinking
AI Development
  • Richard Sutton on "The Era of Human Data"
  • Philipp Schmid on Context Engineering
  • Fooyo AI and Notellect platforms
Key Concepts
  • Transformer architecture (Vaswani et al., 2017)
  • Reinforcement Learning from Human Feedback (RLHF)
  • Few-shot learning in large language models
  • Human-AI collaboration frameworks
  • Experience-based AI learning
This presentation has drawn on these diverse sources to provide a comprehensive view of how humans and AI can work together effectively in reasoning and decision-making processes.