Research Journal

Building in the Open

Our research journey, documented honestly. The challenges, insights, and gradual progress toward emotionally intelligent AI.

Updated as we learn
Transparent process

Where We Are Now

An honest look at our current research focus and the questions we're exploring.

Understanding Context

How can AI better understand the emotional context behind user interactions? We're exploring ways to make AI more empathetic and contextually aware.

User-Centric Design

Learning from real users of Lucerna AI to understand what truly helps people in their career journeys and personal growth.

Ethical Foundations

Building frameworks for responsible AI development that prioritize user benefit and privacy from the ground up.

Research Insights

Key learnings from our journey so far—shared openly to contribute to the broader AI research community.

The Challenge of Authentic Empathy in AI
Ongoing Research
Core Insight

One of our biggest challenges is distinguishing between AI that appears empathetic and AI that genuinely understands emotional context. Through our work on Lucerna AI, we've learned that true emotional intelligence requires understanding not just what someone says, but the deeper context of their situation, goals, and emotional state.

This isn't just about sentiment analysis or pattern matching—it's about building systems that can hold space for human complexity. We're exploring how AI can recognize when someone is frustrated with their career not because they lack skills, but because they feel undervalued, or when career advice needs to account for personal circumstances beyond just professional qualifications.
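To make that distinction concrete, here is a minimal sketch in Python of the gap between a surface sentiment score and the richer context we're describing. The class and field names are purely illustrative, not a description of how Lucerna AI works internally:

```python
from dataclasses import dataclass

@dataclass
class EmotionalContext:
    surface_sentiment: str  # what sentiment analysis alone would report
    situation: str          # the broader circumstances behind the message
    underlying_need: str    # the inferred reason for the emotion

# Two messages that score identically as "negative" sentiment,
# yet call for very different responses.
skills_gap = EmotionalContext(
    surface_sentiment="negative",
    situation="early career, repeated interview rejections",
    underlying_need="concrete skill-building guidance",
)
feeling_undervalued = EmotionalContext(
    surface_sentiment="negative",
    situation="senior role, passed over for promotion",
    underlying_need="recognition and advocacy strategies",
)
```

A sentiment classifier sees the same signal in both cases; everything that makes the response useful lives in the fields a sentiment score can't capture.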

This research is ongoing and forms the foundation of our approach to building Lspaces—an AI companion that doesn't just respond to queries, but truly understands the human behind them.

Learning from Real User Interactions
Continuous Learning
User Research

The most valuable insights come from observing how people actually interact with AI tools. Through Lucerna AI, we're learning that users don't just want better outputs—they want AI that understands their unique context and adapts to their communication style.

For example, we've noticed that career advice lands differently when it acknowledges someone's current emotional state. A person who's been job searching for months needs different support than someone exploring a career change from a position of stability. The same technical advice, delivered with more or less emotional awareness, can have a completely different impact.

This observation is shaping our research into memory systems and personalization—how can AI remember not just what you've discussed, but how you prefer to receive support and what your current emotional context might be?
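As a toy illustration of that question, the sketch below separates what a system remembers from how it should deliver support. Everything here, from the UserContext fields to the frame_advice helper, is hypothetical and meant only to show the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    # What has been discussed: durable topics and facts.
    topics: list[str] = field(default_factory=list)
    # How this user prefers to receive support (tone, directness, pacing).
    style: dict[str, str] = field(default_factory=dict)
    # A recent, revisable read on emotional state, never a permanent label.
    emotional_note: str = ""

def frame_advice(advice: str, ctx: UserContext) -> str:
    # The same underlying advice, wrapped in context-appropriate framing.
    opener = f"Knowing you're {ctx.emotional_note}, " if ctx.emotional_note else ""
    if ctx.style.get("tone") == "direct":
        return opener + advice
    return f"{opener}one option you might consider: {advice[0].lower()}{advice[1:]}"

ctx = UserContext(
    topics=["job search", "interview prep"],
    style={"tone": "gentle"},
    emotional_note="six months into a difficult search",
)
print(frame_advice("Tailor your resume to each posting.", ctx))
# -> "Knowing you're six months into a difficult search,
#     one option you might consider: tailor your resume to each posting."
```

The point of the separation is that the advice itself never changes; only the framing does, driven by what the system remembers about the person.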

The Ethics of AI Memory
Foundational Research
Ethics & Privacy

As we explore AI systems that remember and learn from interactions, we're grappling with fundamental questions about privacy, consent, and user agency. How much should AI remember? How do we ensure that memory serves the user's growth rather than creating dependency?

We're developing frameworks that put users in control of their AI's memory. This means not just allowing users to delete data, but helping them understand what their AI remembers and why. It also means designing memory systems that fade naturally—just like human memory—rather than storing everything forever.
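One plausible way to implement memory that fades is to give each item a salience score that decays exponentially with a half-life, and to drop items once they fall below a threshold. The sketch below is our illustration of that idea, not Lspaces' actual design; the class name, half-life, and threshold are all assumptions:

```python
import time

HALF_LIFE_DAYS = 30.0    # hypothetical tuning knob: salience halves every 30 days
FORGET_THRESHOLD = 0.05  # hypothetical floor: items below this are forgotten

class FadingMemory:
    def __init__(self) -> None:
        # Each item keeps its creation time, its content, and the *reason*
        # it was stored, so the system can explain itself to the user.
        self._items: list[tuple[float, str, str]] = []

    def remember(self, content: str, reason: str) -> None:
        self._items.append((time.time(), content, reason))

    def _salience(self, created_at: float, now: float) -> float:
        # Exponential decay: salience halves every HALF_LIFE_DAYS.
        age_days = (now - created_at) / 86400.0
        return 0.5 ** (age_days / HALF_LIFE_DAYS)

    def recall(self) -> list[tuple[str, str, float]]:
        # Prune anything that has faded below the threshold, then return
        # the rest along with its reason and current salience.
        now = time.time()
        self._items = [
            item for item in self._items
            if self._salience(item[0], now) >= FORGET_THRESHOLD
        ]
        return [(c, r, self._salience(t, now)) for t, c, r in self._items]

    def forget(self, content: str) -> None:
        # User-initiated deletion always overrides the decay schedule.
        self._items = [item for item in self._items if item[1] != content]
```

Storing a reason alongside each item is what makes "understand what your AI remembers and why" something you can actually surface to the user, and explicit deletion bypasses the decay schedule entirely.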

This research is crucial for Lspaces, where the goal is to create AI companions that grow with users over time. The challenge is building systems that are both helpful and respectful of human autonomy.

Open Questions We're Exploring

The big questions that drive our research—areas where we're still learning and would welcome collaboration.

How do we measure genuine emotional understanding in AI?

Beyond sentiment analysis, what metrics can help us evaluate whether AI truly understands emotional context and responds appropriately?

What's the right balance between AI memory and user privacy?

How can AI systems remember enough to be helpful while respecting user autonomy and avoiding the creation of dependency?

How do we ensure AI remains beneficial as it becomes more sophisticated?

What safeguards and design principles can help us build AI that amplifies human potential rather than replacing human judgment?

Can AI learn to adapt its communication style to individual users?

How might AI systems learn not just what to say, but how to say it in a way that resonates with each person's unique communication preferences?

What role should AI play in human decision-making?

How can AI support human decision-making without overriding human agency or creating over-reliance on AI guidance?

How do we build AI that grows with users over time?

What architectures and approaches can enable AI to evolve alongside users, supporting their growth rather than treating them as static?

Follow Our Research Journey

Get updates when we publish new insights, research findings, or major milestones. No spam, just genuine progress updates.

We'll only email you when we have meaningful research updates to share. Unsubscribe anytime.