
Artificial Intelligence, Discussion about Implementation

Discussion in 'Nerd Out Zone' started by NeonSturm, Aug 13, 2016.

  1. NeonSturm
    Remember that this is posted in the NERD_OUT_ZONE - I hope nobody changes this Constant :)

    Where to start?
    1. Information gathering from audio/pictures into data.
    2. Dropping redundant information.
    3. Evaluate final information.
    4. Thinking about possible changes to memorized thoughts and standard answers.
    5. Organize memorized thoughts.
    6. Create an interpretation language for all of the above.
    7. Implement drivers with code-generated "stable" C++ from snippets which process pipes and reformat data between pipes. See the code as cached interpreter steps.
    Did I miss something? I won't hurt you for telling me - if you do it nicely ;)

    Information gathering from audio/pictures into data.

    Analog/Audio has many dimensions:
    1. Amplitude, Volume - "near" and "far", "loud/bright" and "dampened/quiet"
    2. Frequency - "high" and "low" frequencies
    3. Amplitude differential
    4. Frequency differential, Pitch - increase or decrease relative to the base frequency.
    5. Temporal Shift, Swing - delayed or early arrival of expected repeats (applies to pattern blocks, not to single sound waves).
    And we can perceive extra dimensions:
    1. "Jump back and distinguish": if a pattern is repeated, go back to the last occurrence. Any divergence between the last occurrence onward and the current occurrence onward can be treated as its own dimension.
    2. Differentiate between an instrument's track and the mix minus that instrument (the track's differential).
    3. Differences between 2 instruments which diverge in volume, frequency or any other characteristic.
    Perhaps someone can link an example - there might be a good procedural example on SoundCloud or a similar site which I don't know of.
    Picture data:
    I was thinking about streams of pixel-data and whether a line turns left or turns right and got an idea.

    A pixel in the pixel shaders cannot interact with the neighbours.
    But when reading a GPU-specification, I've read "800 Stream-Processors".
    And then some questions arose:

    Can 2+ of these streams of information (or pipes) be mixed together in the GPU manually?
    Can you pipe (top+left+bottom+right+current) into a mixer and the output stream into a processing step?
    Can these streams change their length during execution (feed themselves)?

    Perhaps I can become good at this if I get a good example in a language I can actually use (one which does not depend on proprietary Microsoft- or Apple-exclusive libraries - of those examples I have enough :( ).
    First, I want a differential between the pixel and neighbour pixels (what changes on transition).
    Then, I would search for the 2 most similar neighbours, using the most distinctive attribute among (red, green, blue, brightness, hue, contrast) - it might be more accurate with 9 pixels or with "area information" calculated on a reduced-resolution copy of the picture.
    A good visualization is "Gimp/Filters/Edge-Detect/Edge…/Sobel algorithm", but maybe it has to be adjusted for performance or to ease further processing.

    The result could look like nested Voronoi-cells. An eye would be an elliptic cell with a round cell inside it. But to truly understand how this information is extracted, we have to see the lines and their interactions as music.
    We are really good at recognizing voice patterns - would it be surprising if we were similarly good at perceiving lines as left-turning/straight/right-turning, like a constant frequency, a rising pitch or a falling pitch?

    I think we understand how to process this data if we can find an equivalent of visual parallel data in serial-data like music.
    Bats see with reflected sounds - that's where I would borrow algorithms from.​

    2. Dropping redundant information.

    Sometimes we see something that is not there. A different picture in the negative colours, or a UFO in some light patterns, or "the man in the moon".

    As soon as lines, their proportions (size order), their hierarchy (right/left, concave/convex - thus inside/outside) and all other features match something we find in our memory, we see that thing.
    We likely find everyday patterns faster than anything seen only once, because common patterns are remembered first.

    The human brain stops wasting resources as soon as any acceptable solution is found (and it is also a master at handling all the failures that come from that).
    No algorithms which always produce the same result or process something completely through - they only run until there is an acceptable result.

    3. Evaluate final Information.

    Thinking starts as soon as we connect information to a concept.
    A concept is "A is smaller than B" and "When B then C is true".
    Language itself is not the concept, but we can describe concepts in language.

    Language can be used to transfer concepts, to save them on paper and digital media, and to connect 2 concepts together.
    Connected concepts share similarities and information until the information is evaluated to belong to one concept or both/all.​

    4. Thinking about possible changes to memorized thoughts and standard answers.

    We don't need to think everything anew.
    Perhaps subconsciousness starts where information is processed so effectively/automatically that memory doesn't change during it, and we don't have to ask ourselves whether to adapt memorized thoughts.

    When I ask myself how I reasoned out a thought, I know the answer. But I don't know whether I actually thought it just now, or merely knew the result because I thought it before.
    I know that I knew the result before, because it is associated with a date or with pictures from before I got the information that started the thought.

    Actually it doesn't matter if you think or remember as long as the results are equal.
    But if you are self-aware enough, you can evaluate the reasoning process backward and get the associations from results and reasoning-algorithms to give them a time-stamp of when you thought consciously and fully-aware of them the last time.
    But if you are even more self-aware, evaluating the reasoning backward is itself already memorized and no longer actively thought (just the results are served to your active thinking).
    • I can get the proof by thinking about myself or at least the proof that it should often feel like that to be me.
    Only new things might be thought actively - or evaluation processes.
    With exceptions such as that we think about what we are currently doing (such as writing this post).
    Perhaps we don't need most of our brain to think but need it to adapt/learn - that's the explanation I found.

    Ok, we have active thoughts, memorized thoughts and memorized thoughts give standard answers - Section complete :)

    5. Organize memorized thoughts.

    Symbols (words, memories, …) can have distances to each other - complexity of thought.
    Common symbols may have a lower distance to common others.

    There are back-links and it is more hyperlinked (web) than hierarchical.
    Some sort of index must be hierarchical, but it might use not letters for the hierarchy (as in database indexing) but attributes of spoken language or visual patterns.

    This is the core and a topic for itself - I think my post becomes too long explaining this here, but I might add a link later.​

    6. Create an interpretation language for all of the above.

    We don't need external libraries or compiler directives for the execution of AI-code.
    We need a unified way of handling data, tasks and functions which sits above library collections and extra language features for inherited classes or typed data.

    In a good AI-environment, all data-types are chosen by their usage, not through a declaration.

    Thus we need to build an environment rather than using Standard-C or Standard-Java for giving the AI intelligence.

    Because it's easier, faster and produces fewer bugs if we have no problems with type declarations or limited lifetime in memory (which should be handled automatically, because all thoughts are so similar in their fundamental form that it is easy to create an optimized memory handler).

    7. Implement drivers with code-generated "stable" C++, from snippets which process pipes and reformat data between pipes. See the code as cached interpreter steps.

    Thoughts might run a benchmark engine in the background, or a counter of how often they are thought and in which context.

    Maybe also some caching algorithm which pre-loads data, decides what is kept in memory, and separates often-used from rarely-used data for each thought.

    But imagine doing that for each pixel which you perceive in a visual processor …
    … sometimes you have to export pipe-processors into C++ or a similarly fast language.

    Now you have fixed typing for uniform data and input/output with an algorithm that cannot fail, because every input has a valid output (from C++'s point of view) - it's more of a translation plugin.

    It also does not need memory management, as pipe processors don't grow in size.
    Such pipe processors can also keep all data in registers rather than putting it into RAM, because they are small.

    Pipe processors might only run once they receive so much data that loading times no longer matter.
    Otherwise you may still stick to interpreted code, which is already loaded and cached.
    Last edited: Aug 13, 2016
    NeonSturm, Aug 13, 2016
