User:Wyn/Special Issue 27/Prototype
Prototype: interview with tools
1. Tools?
Physical: inkjet printer, laser cutter, ceramics, needle, weaving machine, scissors. Virtual: Internet, web search, AI assistant, Visual Studio Code, translator, CSS, JS, HTML.
2. Environment?
The virtual tools are based on the Internet; they draw on massive stores of information and databases, connecting algorithms through complex interfaces.
3. response-ability interview
·Where? And who speaks to whom? ·It happens in the virtual environment. When I need to do research related to my project, I ask Claude and explore alongside it. I need feedback, and it also helps me organise my words and ideas. Even though I talk about "AI", I know the actual system I am talking with is an LLM.
·How does the LLM respond to my questions and requests? ·Anthropic follows a two-step approach in its LLM interpretability research. First, it identifies features: interpretable building blocks that the model uses in its computations. Second, it describes the internal processes, or circuits, by which features interact to produce model outputs. Because of the model's complexity, Anthropic's new research could illuminate only a fraction of the LLM's inner workings. But what was revealed about these models seemed more like science fiction than real science.
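The two-step idea above can be sketched in toy form: treat a "feature" as a direction in the model's activation space, and a "circuit" as a weighted combination of feature activations that drives an output. This is only an illustrative caricature of the interpretability research described here, not Anthropic's actual method; every number and feature name below is invented.

```python
# Toy sketch: features as directions in activation space, circuits as
# weighted combinations of feature activations. Purely illustrative.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def feature_activation(hidden_state, feature_direction):
    """Step 1: project a hidden state onto an interpretable feature direction."""
    return dot(hidden_state, feature_direction)

def circuit_output(hidden_state, circuit):
    """Step 2: combine feature activations with weights to score an output."""
    return sum(weight * feature_activation(hidden_state, direction)
               for direction, weight in circuit)

# A made-up 3-dimensional hidden state and two made-up, named features.
hidden = [0.5, -1.0, 2.0]
feature_rhyme = [1.0, 0.0, 0.0]  # hypothetical "rhyme" feature
feature_plan = [0.0, 0.0, 1.0]   # hypothetical "planning" feature

circuit = [(feature_rhyme, 0.3), (feature_plan, 0.7)]
print(circuit_output(hidden, circuit))  # 0.3*0.5 + 0.7*2.0 = 1.55
```

Real features live in spaces with thousands of dimensions and are found with learned dictionaries rather than hand-written vectors; the sketch only shows why a feature can be read off as a projection.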
·Can I feed back? ·Half and half. I had many conversations with the AI assistant; it gave me different perspectives on the same topic and helped me dig deeper. However, it cannot tell whether its output actually matches the process it claims to have followed. When I point out that an answer has logical problems, it cannot pass my feedback to the LLM directly; it acknowledges the problem first, yet insists its output is correct. It is designed to respond to me, but its ability to receive my feedback is limited by technical and organisational boundaries. True response-ability would require bidirectional learning and transparent processes, which remain an aspiration rather than a reality in current AI systems.
·How do AI assistants respond to my requests and questions? ·For example, Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal "language of thought." Anthropic shows this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them. Claude will also plan what it will say many words ahead, and write to get to that destination. Anthropic shows this in the realm of poetry, where the model thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so.
·How to evaluate LLM response-ability? ·By setting up several standards: accuracy, relevance, and coherence; most important of all is the handling of ambiguity and bias.
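The standards above could be turned into a rough, automated first pass. Below is a minimal sketch, assuming deliberately naive proxies for each criterion (keyword overlap for relevance, length for coherence, hedge words for ambiguity handling); real evaluation would need human judgement or much stronger metrics, and every threshold here is an invented placeholder.

```python
# Naive rubric scorer for an LLM response: one rough 0-1 score per
# criterion. Proxies are placeholders, chosen only to make the idea concrete.

def score_response(response, reference_terms):
    """Return a dict of rough 0-1 scores for one response string."""
    words = [w.strip(".,!?;:").lower() for w in response.split()]
    # Relevance: fraction of expected reference terms the response mentions.
    relevance = (sum(term.lower() in words for term in reference_terms)
                 / max(len(reference_terms), 1))
    # Coherence proxy: penalise one-word or near-empty answers.
    coherence = 1.0 if len(words) >= 5 else len(words) / 5
    # Ambiguity handling: does the response hedge at all when it should?
    hedges = {"might", "may", "unclear", "depends", "uncertain"}
    handles_ambiguity = 1.0 if hedges & set(words) else 0.0
    return {"relevance": relevance, "coherence": coherence,
            "handles_ambiguity": handles_ambiguity}

scores = score_response(
    "The answer depends on context and may vary between models.",
    ["context", "models"])
print(scores)  # all three criteria score 1.0 for this hedged, on-topic reply
```

Accuracy and bias are conspicuously absent from the sketch: neither can be judged from the response text alone, which is part of the point about limited response-ability.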
·How to learn about the LLM's internal workings and improve my thinking process? ·Through a shared abstract space where meanings exist and where thinking can happen before being translated into specific languages. More practically, this suggests Claude can learn something in one language and apply that knowledge when speaking another. Studying how the model shares what it knows across contexts is important for understanding its most advanced reasoning capabilities, which generalise across many domains. In other words, the model combines independent facts to reach its answer rather than regurgitating a memorised response.