visarga 18 hours ago

I like this approach. It's not that intelligence is in the brain or the model; it shouldn't even be a noun. It is a process: a search process based on exploration and learning.

Think of a river - the water carves the banks, the banks channel the water. Which is the real river?

A model is like the river banks: it is not intelligent in itself. Neither is activity by itself. Intelligence comes from their co-constitutive relation.

You can see the structure+flow coupling in many places. The model weights (banks) vs the training set (water). The contextual tokens (banks) vs the mass of probability for the next token (water). Agent environment (banks) vs agent activity (water).

The trick here is that neither explains the process on its own, and neither is more fundamental than the other. It is a recursive process rewriting its own rules.

And we know from Gödel, Turing and Chaitin that recursive processes exhibit incompleteness, undecidability and incompressibility. There is no way to predict them from outside; they can only be known through unrolling. You have to be inside, or you have to be it, to know it.

  • patcon 9 hours ago

    You are one of the few random HN users who have their own page in my personal knowledgebase, because I consistently see your wonderful comments, and I can only hope to be swimming in parallel to your perspectives :)

    Thanks for your great contributions, especially your dedication to metaphor!

  • chrisweekly 14 hours ago

    +1, brilliant metaphor, thanks for your thought-provoking comment!

  • close04 3 hours ago

    > Think of a river - the water carves the banks, the banks channel the water. Which is the real river?

    Fantastic metaphor.

    I also can't help but draw a parallel (or even an overlap) with how the process works for humans. Once you put aside the intrinsic "model" (brain) differences (nature vs. nurture), humans develop their intelligence the same way: relentless exploration, with decision making guiding that exploration. We don't just funnel new info into a child's brain; they're allowed to explore threads around it and take different paths, which then shape how the next bit of info is processed.

    If we're looking at building anything like human intelligence, the only advanced general intelligence we know and understand to a degree, exploration will be critical.

webdevver 18 hours ago

speaking of exploration, has anyone ever thought about non-cartesian displays? say, pixels that are arranged in a hexagonal pattern. or pixels arranged in a radial pattern and addressed via polar coordinates.

what pattern are human color/light sensors arranged in? maybe we should replicate that pattern? an organic arrangement discovered via simulated annealing?

all this bitmap x/y display stuff is very pre-AI-ish. old tech. victorian era clockwork mechanism. built to make it easy to reason about for humans, before the advent of neuron-soup. maybe we can do better?
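A hexagonal framebuffer could still be addressed with two integers. As a rough sketch (the names here are illustrative, not from any real display API), axial hex coordinates map to pixel centers like this:

```python
import math

def hex_pixel_center(q: int, r: int, size: float = 1.0) -> tuple[float, float]:
    """Center of a pointy-top hexagonal pixel at axial coordinates (q, r).

    `size` is the hexagon's center-to-corner radius. This is the standard
    axial-to-Cartesian mapping for hex grids, shown only to illustrate
    that hex addressing stays a simple two-integer scheme.
    """
    x = size * math.sqrt(3) * (q + r / 2.0)
    y = size * 1.5 * r
    return x, y
```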

  • AlotOfReading 14 hours ago

    Yes, it's been well-explored, and there are a number of clever ways to convert them back to Cartesian coordinates, because those are much nicer for humans to work with.

    The very earliest color CRTs used triangular arrangements of subpixels for each color, for example.
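    For the polar case, the conversion back to Cartesian is just the standard trigonometric mapping; a minimal sketch (function name is mine):

```python
import math

def polar_to_cartesian(radius: float, theta: float) -> tuple[float, float]:
    """Map a polar-addressed pixel (radius, angle in radians) to x/y."""
    return radius * math.cos(theta), radius * math.sin(theta)
```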

  • card_zero 11 hours ago

    This pattern?

    https://commons.wikimedia.org/wiki/File:ConeMosaics.jpg

    Maybe we should put the electronics in a layer in front of the display, make them transparent, and use them to filter the light. The vertebrate retina does something like that, putting nerves in front of light sensors. Imitating evolved solutions isn't always a good idea. Nature's design process is like: start with the wrong parts for the job. Lay them out back to front. Wire them together the long way round. Adapt this design slightly and latch on to any convenient side effects. Iterate ten billion times. Now it works pretty good! Or it doesn't, but the species survives anyway, which is also fine since there is no goal.