Introduction

"I've been using this chat screen for three days now, so Genspark should understand my project well."

There was a time when I thought that. But the reality was quite the opposite. The longer I used the chat screen, the more "forgetful" Genspark became.

In the previous article, we discussed the problem of Genspark "lying." This time, I'll share my experiences and countermeasures regarding the "AI memory problem," which I deeply felt during the development of a fortune-telling website.

Human Perception vs. AI Reality

Human Expectations

"If we work together for a long time, the other party should learn my way of thinking and background, and their understanding should deepen."

This is true in human relationships, isn't it? When working with the same team members, you can communicate even if you omit explanations.

AI Reality

In reality, it's the opposite:

  • Important information gets buried as chat history grows
  • Reaching the limit of the context window (memory capacity)
  • Unstable access permissions to AI Drive
  • Memory fragmentation progresses, and consistency is lost

According to IBM's technical documentation, even a large model like GPT-4 Turbo has a context window limit of 128,000 tokens (roughly 100,000 characters of Japanese text, where one character is approximately one token). A long chat history eats into this capacity, causing important information to be "forgotten."

Experience 1: On Day 3, the AI Suddenly "Forgot the Specifications"

On the third day of developing the fortune-telling website, we had this conversation.

Me: "Please modify the code according to the Twitter API integration specifications we discussed yesterday."

Genspark: "I apologize. I cannot find any information about the Twitter API integration specifications. Could you please provide more details?"

Huh? But we discussed it extensively yesterday...

Analyzing the Cause

  • The chat history became too long, and information from two days ago disappeared from "memory."
  • Context window capacity overflow.
  • The AI could only refer to the most recent conversations.

At this point, the conversation on the same chat screen had exceeded approximately 20,000 characters.

Experience 2: Loss of Access Permissions to AI Drive

Even more serious was the phenomenon of being "unable to read" specification documents stored in Genspark AI Drive.

Me: "Please check /Genspark Development Log/00_AI Instructions.md in AI Drive and tell me the project policy."

Genspark: "I apologize. I cannot access that file."

Even though the file actually existed, it seemed the AI had lost access permissions. The following causes are conceivable:

  • Internal access tokens may not be refreshed when a chat session runs for a long time
  • The connection to AI Drive becomes unstable
  • File path information is lost to memory fragmentation

Prolonged use of the chat screen leads to unstable access to AI Drive. This problem can be avoided by regularly migrating to a new chat screen.

Context Window Limits

Even the latest AI models have context window limits:

Model          Context Window
GPT-3.5        Approx. 4,000 tokens
GPT-4          Approx. 8,000-32,000 tokens
GPT-4 Turbo    Approx. 128,000 tokens
Claude 3.5     Approx. 200,000 tokens

These may seem like large numbers at first glance, but in a development project:

  • Specification document: 5,000-10,000 tokens
  • Chat history: 10,000-20,000 tokens per day
  • Entire codebase: 20,000-50,000 tokens

The capacity limit is quickly reached.
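A quick back-of-the-envelope calculation makes this concrete. Using the upper end of the estimates above and a 128,000-token window (a GPT-4 Turbo-class limit), a sketch of the budget looks like this; the figures are the article's rough estimates, not measured values:

```python
# Rough context-window budget for a multi-day development project.
# Token figures are the article's upper-end estimates.

CONTEXT_WINDOW = 128_000  # tokens (GPT-4 Turbo-class model)

spec_tokens = 10_000      # specification document (5,000-10,000)
code_tokens = 50_000      # entire codebase (20,000-50,000)
chat_per_day = 20_000     # chat history per day (10,000-20,000)

def days_until_full(window, fixed, per_day):
    """Days of chat history that fit before the window overflows."""
    return max(0, (window - fixed) // per_day)

days = days_until_full(CONTEXT_WINDOW, spec_tokens + code_tokens, chat_per_day)
print(f"Window overflows after about {days} days of chat history")
```

Under these assumptions the window fills after roughly three days of chat, which lines up with the "Day 3" forgetting described earlier.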

Memory Fragmentation: The Most Troublesome Problem

The most troublesome issue in long-term chat is "memory fragmentation."

Symptoms

  1. Forgetting previously decided design policies and making contradictory suggestions.
  2. Even when told, "We discussed that before," it responds with, "No record found."
  3. Forgetting the file structure of AI Drive and stating that existing files "cannot be found."
  4. Forgetting the project's background and objectives, making irrelevant suggestions.

This occurs because the AI "compresses" and summarizes old information, leading to the loss of detailed information.

Practical Countermeasure: Migrating Chat Screens Every Other Day

The most effective countermeasure I found is to regularly start a new chat screen.

Recommended Patterns

  • Daily migration: Large-scale projects or critical development
  • Every other day: Normal development pace
  • Within 3 days: Minimum countermeasure

Chat Migration Steps

1. Create a Work Log in the Current Chat

Today's Work:
- Twitter API integration completed
- Resolved OAuth 1.0a authentication issue
- Next task: Database design

Important Decisions:
- Include media_data parameter in the signature
- Manage environment variables in the .env file

2. Save to AI Drive

"Please save today's work log to /Project Name/Work Logs/2025-12-05.md in AI Drive."

3. Resume in a New Chat Screen

"Please load /Project Name/00_AI Instructions.md and /Work Logs/2025-12-05.md from AI Drive to understand the current project status."

Regular chat screen migration is the most effective way to avoid Genspark's memory problems. Migrate to a new chat screen every 1 to 3 days.
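Step 1 of the migration can even be scripted locally before uploading to AI Drive. The helper below is a hypothetical sketch (there is no Genspark API involved; the function name, fields, and file layout are illustrative) that writes a dated work log in the format shown above:

```python
from datetime import date
from pathlib import Path

def write_work_log(project_dir, tasks, decisions):
    """Write a dated Markdown work log (hypothetical local helper,
    mirroring the chat-migration log format from the article)."""
    log_dir = Path(project_dir) / "Work Logs"
    log_dir.mkdir(parents=True, exist_ok=True)
    path = log_dir / f"{date.today().isoformat()}.md"

    lines = ["Today's Work:"]
    lines += [f"- {t}" for t in tasks]
    lines += ["", "Important Decisions:"]
    lines += [f"- {d}" for d in decisions]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path

# Example: the log from step 1 above
log = write_work_log(
    "Project Name",
    ["Twitter API integration completed",
     "Resolved OAuth 1.0a authentication issue",
     "Next task: Database design"],
    ["Include media_data parameter in the signature",
     "Manage environment variables in the .env file"],
)
print(log.read_text(encoding="utf-8"))
```

Generating the log from a script keeps the format consistent from day to day, so the new chat session always receives the same structure.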

Utilizing AI Drive: Creating an Environment Where Forgetting is Okay

The Genspark AI Drive feature is a powerful solution to this problem.

Key Features of AI Drive

  1. Automated File Collection and Organization
    • Automatically collects web materials based on natural language instructions
    • Acquires files regardless of format, including PDF, Office documents, images, and videos
  2. Persistent Storage
    • Files remain saved even if the chat screen changes
    • Past information can be referenced at any time
  3. Structured Information Management
    • Organizes folders by project
    • Systematically manages specifications, work logs, and reference materials

Practical Folder Structure

/Project Name/
├── 00_AI Instructions.md          # Overall project outline
├── 01_Design Documents/
│   ├── Project Final Plan.md
│   └── Technical Specifications.md
├── 02_Work Logs/
│   ├── 2025-12-03.md
│   ├── 2025-12-04.md
│   └── 2025-12-05.md
├── 03_Reference Sources/
│   └── Past Project Code/
└── 04_Troubleshooting/
        └── Resolved Issues List.md

Quick Reference: Supplementing AI's Memory

When specification documents become large, Genspark stops loading the entire document. This is where "quick references" become effective.

Example of Quick Reference

# Project Quick Reference

## Basic Information
- Project Name: Genspark Development Log
- Objective: Automatic update of Genspark/AI information blog
- Tech Stack: React + TypeScript + Cloudflare

## Important Decisions
1. Use Hatena Blog only (Do not use Twitter)
2. Link to main site once every three posts
3. Post articles on Monday, Wednesday, Friday

## Common Problems and Solutions
1. OAuth authentication error → Include media_data in signature
2. Environment variables not readable → Check .env file placement

By having the AI load this file first, it can quickly grasp the overall picture of the project.

Summary: Understanding AI's Limits and Working Effectively With Them

Key Points:
  1. The chat screen degrades with prolonged use - Migrate to a new screen every 1 to 3 days
  2. Externalize persistent memory with AI Drive - Always save specifications and work logs, organize folder structure for easy searching
  3. Quick reference to quickly "remind" the AI - Summarize important points on one page, always load it when starting a new chat
  4. Design with the assumption that AI will "forget" - Always document important decisions, do not rely solely on verbal (chat) communication

Genspark is a powerful tool, but it does not possess long-term memory like humans. By understanding its limitations and utilizing AI Drive to set up "external memory storage," efficient development becomes possible.

Next time, under the theme "Genspark Also Embeds Bugs," we will introduce code quality issues generated by AI and debugging methods.