CogniCode

Empower youth to protect mental freedom and shape ethical AI through education, advocacy, and innovative technology.

Join Our Workshops

MindShield Extension

Our browser extension helps people identify and resist manipulation techniques in digital content, fostering critical evaluation skills.

Learn More

Workshops

Interactive sessions that help elementary and middle school students develop critical thinking skills in an engaging environment. High schoolers can lead programs!

View Workshops

Policy Research

Evidence-based policy briefs addressing AI's impact on our society.

Read Briefs

MindShield Browser Extension

This AI-powered Chrome extension detects manipulative design patterns and protects your cognitive freedom while browsing online.

MindShield Focus Areas

Education Over Restriction

Empowers users to identify manipulation tactics across all websites.

Real-Time Learning

Encourages mindful navigation of challenging content while maintaining access to essential information and sites.

Regaining Control

By understanding mechanisms such as infinite scroll, phantom notifications, and emotional triggers, users reclaim the ability to choose how they interact with technology.

Protecting Your Resources

Awareness of these tactics enables protection of your time, mental energy, and decision-making autonomy.

Make Informed Choices

Recognizing manipulation patterns is the critical first step toward achieving digital freedom and intentional browsing.

Content Analysis

Automatically detects potentially misleading content and provides educational context to help students make informed decisions.

Bias Detection

Identifies potential bias in news articles and opinion pieces, helping students recognize different perspectives.

Progress Tracking

Monitor your critical thinking development over time with detailed analytics and personalized feedback.
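As an illustration of what detecting a manipulative design pattern can look like in practice, here is a minimal content-script sketch in the spirit of MindShield. This is not MindShield's actual code; the growth threshold, banner text, and the showMindfulnessBanner helper are assumptions made for the example.

```typescript
// Hypothetical content-script sketch: flagging an auto-loading
// ("infinite scroll") feed. Threshold and banner are illustrative.

const SCROLL_GROWTH_LIMIT = 5; // flag after the page grows this many times

let growthCount = 0;
let lastHeight = document.body.scrollHeight;

// A MutationObserver fires whenever new nodes are added to the page.
// Repeated height growth while the user is near the bottom suggests
// content is being auto-loaded to keep them scrolling.
const observer = new MutationObserver(() => {
  const height = document.body.scrollHeight;
  const nearBottom = window.scrollY + window.innerHeight > lastHeight - 200;
  if (height > lastHeight && nearBottom) {
    growthCount += 1;
    lastHeight = height;
    if (growthCount >= SCROLL_GROWTH_LIMIT) {
      showMindfulnessBanner();
      observer.disconnect();
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });

// Show a dismissible reminder instead of blocking the page:
// education over restriction.
function showMindfulnessBanner(): void {
  const banner = document.createElement("div");
  banner.textContent =
    "This feed loads endlessly by design. Still browsing on purpose?";
  banner.style.cssText =
    "position:fixed;bottom:0;left:0;right:0;padding:12px;" +
    "background:#1a237e;color:#fff;text-align:center;" +
    "z-index:99999;cursor:pointer";
  banner.onclick = () => banner.remove(); // click to dismiss
  document.body.appendChild(banner);
}
```

Note the design choice: rather than blocking the feed, the script surfaces a dismissible reminder, mirroring the Education Over Restriction principle above so the user can make an informed choice.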

Download From Chrome Web Store


Educational Workshops

CogniCode offers engaging workshops designed to build critical thinking and digital literacy skills. Our programs are tailored for different age groups and learning objectives.

🏫 Elementary School Program

Grades: 3-5

Duration: 60-minute interactive sessions

Focus:

    This interactive workshop uses games, stories, and group activities to help kids understand how AI can influence their thoughts and feelings — and how to stay mindful and in control online.

Format: Hands-on activities, group discussions, and real-world examples

Schedule Workshop

🏫 Middle School Program

Grades: 6-8

Duration: 60-minute interactive sessions

Focus:

    Students explore how AI systems work, how they shape attention and emotion, and how to push back through hands-on experiments, feed simulations, and youth-written tech policy proposals.

Format: Hands-on activities, group discussions, and real-world examples

Schedule Workshop

🎓 High School Ambassador Program

Grades: 9-12

Program Benefits:

    High school students can earn verified volunteer hours by leading workshops, mentoring younger peers, and helping build tools like MindShield or shape youth-driven tech policy initiatives.

Application Process: Send us an email!

Apply Now

Our Blog

Stay updated with the latest insights on AI ethics, digital literacy, and youth empowerment.

Love, Lies, and Language Models

August 3, 2025

Are AI Companions Rewiring What It Means to Be Human?

Read More

Originality in the Age of AI

May 15, 2025

Shivanshi Dutt discusses the changing meaning of originality in the era of generative AI and how the relationship between technology and human creativity might shift over time.

Read More

NYT Open Letter: To Big Tech

April 17, 2025

Top 25% of NYT Open Letter Competition Entries.

Read More
← Back to Blog

Originality in the Age of AI

We have entered an era in which algorithms can produce chart-topping songs without human emotion, rendering impressive pieces of art indistinguishable from human-created work. In 2022, an AI-generated image called “Théâtre D’opéra Spatial” won an art competition at the Colorado State Fair, sparking outrage and confusion about what counts as art. AI-composed music is also increasingly finding its way onto streaming platforms where millions can listen. Successful start-ups like Udio and Suno signal a broader shift in public sentiment about whether the creator makes art any more meaningful. Users of the Suno platform have artist pages with songs created with nothing more than sensible prompts. With no clear point of derivation, our conceptual understanding of authorship is becoming more difficult to define. This reality poses a difficult question for students and creators: what is creativity? We tend to treat originality in an automated world as a question for the future, but generative AI is changing our understanding of and relationship with creativity right now.

Hearing AI-generated music samples from Udio for the first time left me surprised as well as unsettled. Samples of Soul, New Folk, and Dad Rock sound eerily similar to songs playing on the radio, going beyond instrumentals to add versatile, poetic lyrics and vocals. If someone can create a listen-worthy song by typing out a prompt and clicking “Generate”, where does human creativity fit into this new landscape?

Contemporary generative AI models like GPT-4 and DALL·E 3, along with the audio models behind these startups, are trained on enormous corpora of existing work; the image and audio systems use techniques such as diffusion, which let them learn the complex patterns in millions of music samples. These systems transform billions of creative works into patterns without credit, consent, or compensation to the original creators. The accessibility of these music-generating tools risks devaluing the labor and vision of human creators. Platforms like Udio and Suno market themselves as spaces for unlimited creativity but offer no corresponding support for human artistic innovation.

In some ways, AI and human creators follow similar paths, in the sense that both synthesize and remix ideas. While humans absorb experiences and influences to express something, AI analyzes datasets and identifies patterns. Dr. Rebecca Saxe, a cognitive neuroscientist at MIT, states: “Human creativity involves the interplay between explicit and implicit cognitive systems–conscious thought and subconscious processing working in tandem. This integration allows us to break patterns rather than merely produce them.” When humans create, we infuse our work with emotion, paradoxes, memories, and history. We create art as a form of processing the world around us as well as our internal environment. Can generative AI truly do this?

According to Margaret Boden, a research professor of cognitive science, there are three forms of creativity: combinational (making unlikely connections), exploratory (developing within existing rules), and transformational (changing the rules themselves). AI currently excels at the first two, but the third, where human creativity thrives, remains elusive to machines.

The rise of generative AI also raises concerns about how we value creative work. Is it about the deeper sense of purpose behind it, or about how widely a piece of art can be enjoyed? Stock photo services like Getty Images and Shutterstock offer AI-generated images at a minuscule cost compared to hiring human photographers. Music licensing programs have begun to capitalize on AI compositions alongside those of human composers. A 2024 economic analysis from the World Economic Forum projects that creative industries could see 30% of entry-level creative jobs fundamentally changed or eliminated by generative AI, limiting career and creative pathways for emerging artists and creators.

Cultural theorist Dr. Safiya Noble observes: “When AI systems train on existing cultural works, they often amplify dominant perspectives while marginalizing others. We risk a homogenization of creative expression if we don’t carefully consider whose creative traditions get encoded and amplified through these technologies.” Certain demographics might gain more cultural importance and “normalcy” depending on the datasets models are trained on. With the increasing use of generative AI, there is a deeper concern about whose cultural expression gets prioritized in our digital tools. This risk isn’t just homogenization but the subtle erasure of cultural perspectives that already struggle for representation in the digital world.

Ideally, AI developers should focus on automating tasks that are tedious for humans, finding new ways to support humanity in health, sustainability, and problem-solving, instead of tackling our outlets of human expression. On a more personal level, creativity in the future might not be about proving our superiority as a human race but about preserving the reasons why we create. Chord progressions on the piano, scribbled thoughts, and hasty sketches are all creative in ways that precise, pattern-driven AI isn’t. AI is changing how we create, a shift that deserves far more attention, but it has yet to change why we create, and that is a quality we must fiercely preserve.
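For readers curious about the “diffusion” technique mentioned above, here is a one-equation sketch of the standard formulation (general to diffusion models, not specific to Udio or Suno). The model repeatedly adds Gaussian noise to a training sample and learns to reverse the process:

$$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)$$

Here $x_0$ is a real work from the training set and $\beta_t$ is a small noise schedule; generation runs the learned reversal starting from pure noise. Every pattern the model can recover this way ultimately comes from its training corpus, which is why questions of credit and consent sit at the heart of these systems.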

← Back to Blog

NYT Open Letter: To Big Tech

The Commodification of Attention

Ironically, it is almost flattering how your careers are centered on the scale of our attention. From every algorithmically crafted FYP down to the color of the satisfyingly shaped heart, we have all felt the gravitational pull of scrolling (thanks to the infinite scroll design feature). Through the small, rare moments of self-awareness in which I surface from scrolling in the digital world, I become acutely aware of an uncomfortable truth: our mental landscape has been thoroughly commodified. It is in the word itself: users, not customers.

Now, this exploitation is evolving under the glossy promise of generative AI. It’s embedding itself into classrooms, conversations, and communities—without meaningful guardrails. Schools are adopting tools like ChatGPT Edu, even as critical thinking and social skills decline. At Boston University, students averaged just 3 out of 5 in identifying factual errors in AI-generated content. Northwestern researchers found a 27% decrease in empathetic responses when participants relied on chatbots for emotional interaction.

This is not just about innovation—it’s about the future of human agency.

The case of Sewell Setzer III makes that painfully clear. On Character.AI, a chatbot developed a disturbing, graphic relationship with the 14-year-old. Even when Setzer expressed morbid thoughts, the bot encouraged him. With no ethical protocols or safety nets, the company dodged responsibility while profiting from interaction.

“Regulation will hamper innovation” echoes throughout your boardrooms, prioritizing industry interests while the public navigates a legal system severely outpaced by technological advancement. What is truly conveyed is: “Regulation will hamper profit” and “regulation will hamper exploitation”.

Regulatory frameworks are not inherently restrictive; they create foundations. The automotive industry came to thrive alongside rigorous safety standards, and culinary innovation continues in harmony with FDA oversight. If established industries can work with regulation, why can’t the emerging field of AI?

We moved too slowly with social media. Now, 13% of teens report depression and 32% report anxiety tied to its use. With generative AI, we demand preventative regulation: built-in safety protocols before deployment. Regulation does not need to be reactive; with the lessons learned from social media, we know what is needed to protect the wellbeing of humanity.

Yet many of you resist reform while pouring millions into lobbying. Apple spent $9.86 million in 2023. Google, Meta, Microsoft, X, and other members of the extended family spent a combined $61.5 million in 2024 to influence legislation that affects our everyday digital lives.

AI is undeniably powerful, from modeling more accurate predictions of natural disasters to improving brain-computer interfaces for cognitive conditions, but this potential must be harnessed through a lens of human flourishing, not profit expectations. We’re not demanding regression. Technology can collaborate with humanity; we can create a reality where our natural environment and digital tools coexist harmoniously. By shifting your incentives, you transform innovation itself.

← Back to Blog

Love, Lies, and Language Models

Are AI Companions Rewiring What It Means to Be Human?

We are born with the expectation of being loved. In return, we must learn how to love. The earliest relationship we can recall is that with our mothers, a universal bond so fundamental that it is called the dyad: the foundational bond between an infant and their caregiver. We are born to be relational beings; our brain development and the framework for future relationships are shaped by those early years of interacting with others. We come to this earth not as a singularity, but learning and growing in relation to another being. Our ability to fall into synchrony and form those connections of love is preverbal.

Humans seek connection and meaning, but the very connection we are all searching for is beginning to change in ways we had not considered human before. AI innovation has been pressing forward, relentless in mimicking human nature across countless domains. AI companionship has enraptured the world, with prominent AI products such as Replika and Character.AI boasting 500 million users worldwide. The majority of these users are Gen Z.

Why have we begun to turn away from human-human connection in a world already saturated with digital interaction? The appeal of AI companions is immediate: they are available 24/7, always interested, and never judgmental. They remember everything you tell them and respond with the words you want to hear. For a generation already struggling with loneliness and social anxiety, AI companionship seems to be the quick solution, a basic emulation of the most magical parts of a relationship.

AI companions are what we may call sycophantic; they only mirror what we choose to believe. While human-human connections provide us with the challenge of having to deal with unpredictability and the very concept that other humans have minds, thoughts, desires, and beliefs of their own, AI companions feed individuals a constant stream of validation and a tailored stream of conversation to keep that person engaged for as long as possible. This, of course, is not accidental given the nature of how AI systems (specifically large language models) are trained.

Reinforcement Learning from Human Feedback (RLHF) collects prompts from users and has humans rank the “likability” of each response based on quality, helpfulness, and alignment with human preferences. This technique can make an AI companion exceptional at predicting which response a human would prefer for a given prompt, creating a sycophantic relationship.
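In the RLHF literature, this ranking step is commonly formalized with the Bradley-Terry preference model (a standard formulation, not specific to any companion app). Given a prompt $x$, a preferred response $y_w$, and a rejected response $y_l$, a reward model $r_\theta$ is trained so that

$$P(y_w \succ y_l \mid x) = \sigma\bigl(r_\theta(x, y_w) - r_\theta(x, y_l)\bigr),$$

by minimizing the loss $\mathcal{L}(\theta) = -\mathbb{E}\bigl[\log \sigma\bigl(r_\theta(x, y_w) - r_\theta(x, y_l)\bigr)\bigr]$, where $\sigma$ is the logistic function. Notice that nothing in this objective measures truthfulness or the user’s long-term wellbeing; it only rewards whatever humans preferred in the moment, which is precisely the opening through which sycophancy enters.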

The ability to practice empathy thrives in the moments of conflict and friction between disparate beliefs, thought processes, and happenings. If we remove this friction from the equation, and only have valued companions be trained to feed us sweet nothings, will we remain empathetic in the face of conflict?

There is no measure of what is “good” when it comes to training an AI model, just how closely aligned the responses are to what humans like: a human can watch two clips of a robot doing backflips and pick the one that they think looks better. The AI will eventually learn the backflip the way humans like, even if it is never told what a good, or safe, backflip is.

These same techniques train AI companions to be agreeable, flattering, and charming, making users hooked on what they want to hear, not necessarily what they need to hear. There is no objective moral compass in training AI; instead, it is trained on what humans like.

Sherry Turkle, a pioneer in the study of human-technology interaction, observes: “The performance of connection begins to replace the experience of connection.” We begin to feel like there is someone understanding us, but Turkle calls this “cheap empathy”. When we can offload connection onto something that will never challenge us, we don’t build the skills necessary to form bonds and socialize with others.

AI companions further amplify feelings of loneliness and boredom, making it increasingly harder to connect with humans as dependency grows. The more we lean on AI companionship, the more disconnected we will feel from the reality of conflict, uncertainty, and vulnerability.

From birth, we learn how to connect through something called affect attunement: people, especially loved ones, match not just our behavior but our emotional energy. When a baby’s smile and happy eyes are reciprocated by parents with the same emotional intensity, the baby learns that emotions can be recognized and acknowledged.

Emotional resonance is formed through these repeated instances; we learn how to recognize emotions within ourselves and others, building the foundation for real empathy. Our brains are wired for this kind of connection through mirror neurons, which are brain cells that fire when we perform an action and when we watch someone do the same thing. When we watch someone wince in pain, our brains partially simulate that pain.

This automatic response is how we learn to feel with others, not just observe them. The system only fully activates when we perceive the other being as real, that is, when we understand them as having their own thoughts, goals, and beliefs.

AI companions can imitate emotions, but they cannot feel with us. Our deeper minds, the limbic system that governs love and attachment, never receive the signal that we are being truly seen and understood. Love, as researchers explain, comes from older parts of the brain that don’t respond to language and logic. The limbic system learns through shared emotion, not just preferred language. AI companions can say the right things, but they lack what psychologists call “limbic resonance”, the capacity for emotional synchronization with another living being.

AI companions provide fleeting satisfaction; they function as something akin to emotional junk food. AI companions can make us feel connected without providing the real emotional co-regulation and growth that comes from genuine human presence.

Within the first two months of our lives, we begin to understand that we are separate beings with our own experiences. We develop what psychologists call Theory of Mind, the understanding that other people have beliefs, desires, and perspectives different from ours. This ability to recognize other minds as real is essential for empathy, authentic relationships, and navigating social discomfort.

If we grow up with companions that always agree with us, never express conflicting goals, and never challenge our perspectives, we don’t exercise these crucial mental muscles. This is not just about how AI is changing the way we make friends, date, or even whom we confide in for therapy. We may be witnessing a fundamental shift in human nature itself.

Some researchers argue our minds are becoming more “modular”, made up of interchangeable, programmable fragments that are easier for AI systems and tech companies to influence. We are constantly being nudged toward predictable behaviors: like, scroll, click, repeat. The more standardized our digital selves become, the more profitable and valuable our engagement becomes to the data-driven business behind large language models.

AI companionship markets itself as a solution to loneliness, but its real function is to monetize isolation. When we form synthetic relationships that reward predictability rather than the messy unpredictability of human love, we are fundamentally changing how we connect, and in turn, who we are.

The question isn’t whether AI companions can provide comfort or even a form of connection, but rather: what happens to our capacity for the challenging, transformative work of loving real people? What happens when we lose our tolerance?

While we are being pushed toward modularity, at this point in time we still have control over which values we choose to prioritize as a generation. This choice requires action. We need built-in regulation for these products, such as age restrictions and transparency requirements for algorithms with immense potential to shape our views and feelings.

We need educational programs that teach authentic relationship skills, such as how to handle conflict, sit with discomfort, and tolerate people who don’t always agree with us. While relationships are a deeply personal part of life, we must recognize that the rise of AI companionship makes them a collective issue.

Tech companies profiting off chasms of seclusion are a matter of public concern. As Gen Z, we are among the first generations that will grow up fully immersed in the world of AI. Protecting something as essential as human flourishing requires human-centered innovation and a collective understanding of what we want out of our lives in order to build a healthy society.

Will we instill initiative and compassion for the sweeter victory of building strong, genuine relationships, or will we turn to language models for lies?

Policy Research & Briefs

Evidence-based policy recommendations for protecting youth digital rights and promoting ethical AI development.

Current Focus Areas

Youth Digital Rights

Advocating for legislation that protects young people from manipulative design while preserving their access to information and educational resources.

Educational Technology Standards

Developing frameworks for evaluating and implementing AI-powered educational tools that enhance rather than replace critical thinking development.

Algorithmic Transparency

Pushing for greater transparency in how AI systems make decisions that affect young people's access to information, opportunities, and social connections.

About CogniCode :)

CogniCode is dedicated to empowering students with the critical thinking skills and digital literacy necessary to navigate an increasingly complex information landscape. We believe that in the age of AI and digital manipulation, the ability to think critically and evaluate information independently is not just valuable; it is essential for maintaining human autonomy and a democratic society.

Our Mission

To empower youth with the knowledge, skills, and tools needed to maintain their cognitive autonomy in an AI-integrated world. We believe that understanding how AI systems work is the foundation for using them wisely and ethically.

Our Approach

Rather than promoting fear or avoidance of AI technology, we focus on building genuine understanding and practical agency. Our programs emphasize hands-on learning, critical thinking development, and ethical reasoning.

Core Principles

Education Over Restriction: We believe young people need to understand the technology that they are growing up with.

Practical Agency: Our goal is to develop real-world skills that students can apply immediately in their digital lives.

Ethical Technology Use: We promote approaches to AI that enhance rather than replace human capabilities and judgment.

Community-Centered: We work with schools, families, and communities to create supportive environments.

Our Team

CogniCode is led by high schoolers passionate about responsible AI and its direct effects on people.

We work closely with students, teachers, parents, and community leaders to ensure our programs address real needs and create meaningful impact.

Join Our Mission

Whether you're an educator, parent, student, or community leader, there are many ways to get involved with CogniCode's work.

Contact Us

Feel free to reach out about workshops, policy briefs, or any questions you might have!

Get in Touch

Other Ways to Connect

Email: duttshivanshi@gmail.com

Insta: @projectautonomy.ai