 
 
News 2024 August
   
 

01.August.2024

18:34 UTC+2
Meta (Facebook) has to comply with ToS

Terms of Service (ToS)

See also the notes

  • There is only one OS and Ov of the 19th of September 2023,
  • OS and Os are originals of the 24th of January 2024,
  • SOPR does not accept illegal infrastructures of the 21st of March 2024,
  • There is only one OS and Ov #2 of the 1st of April 2024,
  • There will be no Meta AI in Metaverse due to our OS of the 25th of April 2024,
  • There is only one OS and Ov #3 of the 9th of May 2024,
  • Without complete damages no essential facilities of the 13th of June 2024,
  • SOPR will not give up on controls and damages of the 19th of June 2024,
  • % + %, % OAOS, % HW #12 of the 14th of June 2024 (last section),
  • In paradise there are 2 trees ... of the 2nd of July 2024,
  • OSC standard and mandatory with OsC of the 26th of July 2024,
  • SOPR views Os with OM (e.g. LLM) as infringement of the 27th of July 2024,

    and the other publications cited therein.


    02.August.2024

    04:00 UTC+2
    Social media platforms have become copyright infringements

    *** Sketching mode - maybe merged with Clarification of today ***
    In the end, the trees have not prevented us from seeing the forest and the facts.

  • coherent Ontologic Model (OM) (e.g. Artificial Neural Network Model (ANNM) (e.g. Large Language Model (LLM))),
  • Ontologic roBot (OntoBot),
  • Information Retrieval (IR) System (IRS), including
    • Information Filtering (IF) System (IFS), including
      • Recommendation System or Recommender System (RecS),
    • Search System (SS) or Search Engine (SE), and
    • Question Answering (QA) System (QAS),
  • Knowledge Representation and Reasoning (KRR), including
    • Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG),
  • Knowledge Retrieval (KR), including
    • Semantic Search Engine (SSE),
  • Information System (IS), including Knowledge Management (KM) System (KMS),
  • Expert System (ES),
  • Agent-Based System (ABS) and Agent-Oriented technologies (AOx), including
    • Multi-Agent System (MAS),
  • Intelligent Agent System (IAS), including
    • (voice-based or speech controlled) virtual assistant, and
    • Intelligent Personal Assistant (IPA),
  • Computational Linguistics (CL), including
    • speech recognition system and
    • speech synthesis system,
  • NLP and NLU,
  • Voice User Interface (VUI),
  • Dialog System (DS or DiaS), comprising Dialogue Management System (DMS),
  • Conversational System (CS or ConS) {correct place?} (DS with goal handling, reasoning, intentions respectively plan inference), including Conversational Agent System (CAS or ConAS), Conversational Multi-Agent System (CMAS or ConMAS),
  • etc.

  • Comment of the Day #1 of the 20th of November 2012
  • Peer-to-Peer (P2P) Search (P2PS) and Find (P2PF)
    • OntoP2PF
      "that is extended to a find machine [find engine] by our ontology-based or/and ontologic approaches" respectively
  • search and find engine Ontologics.info with curated sources and contents
  • OntoSearch and OntoFind

  • Social Web Services
  • Fireplace
  • OntoSocial

  • OntoLix and OntoLinux Further steps of the 1st of April 2017
  • OntoBot and OntoSearch exclusive on all Os of the 20th of January 2023

    We recall that the vision, expression of idea, compilation, integration, unification, fusion, architecture, etc. of C.S. are protected by moral rights respectively Lanham (Trademark) rights, and copyrights, and the related legal regulations, including the Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR), apply for the companies

  • Meta (Facebook) and its Meta AI,
  • SnapChat and its My AI,
  • TikTok and its Creative Assistant,
  • X (Twitter), and X.AI Corporation and its Grok,
  • and so on.

    04:00 UTC+2
    Clarification

    *** Work in progress - some quotes, comments, explanations missing ***
    Having discussed the matter in general for several years and in more detail in the Clarification of the 29th of May 2024, the newest insights and activities have led us to focus on the integration of the fields of

  • connectionist system, specifically ANN,
  • Information Retrieval (IR) System (IRS), including Information Filtering (IF) System (IFS), including Recommendation System or Recommender System (RecS), and also Search System (SS) or Search Engine (SE), and Question Answering (QA) System (QAS),
  • Knowledge Retrieval (KR) System (KRS), including Semantic Search Engine (SSE),
  • Computational Linguistics (CL), including Speech Processing (SP), including speech recognition and speech synthesis,
  • Natural Language Processing (NLP) (Natural Language Parsing (NLP) and Natural Language Generation (NLG)) and Natural Language Understanding (NLU),
  • Dialog System (DS or DiaS), comprising Dialogue Management System (DMS),
  • Conversational System (CS or ConS), including Conversational Interface (CI), Conversational User Interface (CUI), and Conversational Agent System (CAS),
  • chatbot,
  • virtual assistant,
  • Intelligent Agent System (IAS),
  • etc.,

    which is included in our

  • Ontologic System Components (OSC)
    • Ontologic roBot (OntoBot),
    • Ontologic Search (OntoSearch) and Ontologic Find (OntoFind), and
    • OntoSocial,

    and

  • Ontologic Applications and Ontologic Services (OAOS).

    We begin with the subjects

  • dialogue or dialog, and
  • conversation, and also
  • Language Model (LM)

    followed by the subjects

  • DS or DiaS,
  • CS or ConS, including
    • CUI, and
    • Conversational Agent System (CAS or ConAS),
  • chatbot,
  • virtual assistant, and also
  • dual process theory,

    as well as

  • SHRDLU,
  • ELIZA, and
  • Jabberwock.

    We quote an online encyclopedia about the subject dialogue or dialog: "Dialogue (sometimes spelled dialog in American English)[1] is a written or spoken [or visual-manually or otherwise by other modalities articulated] conversational exchange between two or more people, and a literary and theatrical form that depicts such an exchange. [...]

    Etymology [("scientific study of the origin and evolution of a word's semantic meaning across time")]
    The term dialogue stems from the Greek [...] (dialogos, conversation); its roots are [...] (dia: through) and [...] (logos: speech, reason). The first extant author who uses the term is Plato, in whose works it is closely associated with the art of dialectic.[3] Latin took over the word as dialogus.[4]
    [...]"

    Comment
    We note that

  • dialog is conversational exchange and
  • dialog is related to speech and also reason.

    We also quote an online encyclopedia about the subject conversation: "Conversation is interactive communication between two or more people. The development of conversational skills and etiquette is an important part of socialization. [...]

    Definition and characterization
    No generally accepted definition of conversation exists, beyond the fact that a conversation involves at least two people talking together.[1] [...] An interaction with a tightly focused topic or purpose is also generally not considered a conversation.[3] Summarizing these properties, one authority writes that "Conversation is the kind of speech that happens informally, symmetrically, and for the purposes of establishing and maintaining social ties."[4]
    From a less technical perspective, a writer on etiquette in the early 20th century defined conversation as the polite give and take of subjects thought of by people talking with each other for company.[5]
    Conversations follow rules of etiquette because conversations are social interactions, and therefore depend on social convention. Specific rules for conversation arise from the cooperative principle. Failure to adhere to these rules causes the conversation to deteriorate or eventually to end. Contributions to a conversation are responses to what has previously been said.
    Conversations may be the optimal form of communication, depending on the participants' intended ends. [...]

    Artificial intelligence
    The ability to generate conversation that cannot be distinguished from a human participant has been one test of a successful artificial intelligence (the Turing test). A human judge engages in a natural-language conversation with one human and one machine, during which the machine tries to appear human (and the human does not try to appear other than human). If the judge cannot tell the machine from the human, the machine is said to have passed the test. One limitation of this test is that the conversation is by text as opposed to speech, not allowing tone to be shown.

    [...]"

    Comment
    We note the

  • interactive communication,
  • difference between focused interaction and conversation,
  • rules of etiquette and conventions,
  • cooperative principle, and
  • responses to what has previously been said.

    For a better understanding of the following quotes, comments, and explanations we also quote an online encyclopedia about the subject language model: "A language model is a probabilistic model of a natural language.[1] [...]
    Language models are useful for a variety of tasks, including speech recognition[3] (helping prevent predictions of low-probability (e.g. nonsense) sequences), machine translation,[4] natural language generation (generating more human-like text), optical character recognition, handwriting recognition,[5] grammar induction,[6] and information retrieval.[7][8]
    Large language models, currently their most advanced form, are [integrated connectionist models or artificial neural networks trained on ...] larger datasets [...]. They have superseded [...] the pure statistical models, such as word n-gram language model.

    Pure statistical models
    Models based on word n-grams
    A word n-gram language model is a purely statistical model of language. [...] It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. [...]

    Neural models
    Recurrent neural network
    Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models).[14] Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, furtherly causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[15]

    Large language models
    A large language model (LLM) is a computational model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification. Based on language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a computationally intensive self-supervised and semi-supervised training process.[16] LLMs can be used for text generation, a form of generative AI, by taking an input text and repeatedly predicting the next token or word.[17]
    [...]
    Historically, up to 2020, fine-tuning was the primary method used to adapt a model for specific tasks. However, larger models [...] have demonstrated the ability to achieve similar results through prompt engineering, which involves crafting specific input prompts to guide the model's responses.[18] These models acquire knowledge about syntax, semantics, and ontologies[19 [... A Large Language Model-Powered Pipeline for Ontology Learning. [26th of May 2024]]] inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.[20]
    [...]
    Although sometimes matching human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.[21]

    [...]"

    Comment
    Our Evoos is based on model-reflection, Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), Information System (IS), including Knowledge Management System (KMS), Expert System (ES), which expresses contexts, ontologies, and language models, and also, logics, AI, ML, CI, ANN, ABS, MAS, HAS, CogAS, and much more.
    This results in our coherent Ontologic Model (OM), which includes Foundational Model (FM) (e.g. Foundation Model (FM)), Capability and Operational Model (COM), and ANNM (e.g. Artificial Neural Network Language Model (ANNLM) (e.g. LLM)).

    See also the Clarification of the 29th of May 2024 for the quote about ontology learning.

    The argument that a Language Model (LM) based on a neural network does not understand the input, respectively the utterance of a user, and merely generates word after word as output, respectively response, depends on the training data.
    Is an LLM a plausible Cognitive Model (CogM)? Others and we have argued both ways:

  • stochastic parrot, one-trick pony, etc., and
  • the internal processing of an ANN is also Turing complete (see also the document titled "Symbol Processing Systems, Connectionist Networks, and Generalized Connectionist Networks" and publicized in December 1990) and also results in the same functionality and output as separate functions and programs, like Cellular Automata (CA), but nobody would really implement an operating system (os), KMS, ConS, etc. with a CA. So why do it with an ANN?

    DISCERN and Bioholonics also show a CogM.
    LLMs alone are not a Cognitive Model (CM or CogM), because the ability to learn and the declarative memory, including explicit memory, episodic memory, and so on, are relevant, but also the implicit memory (see the concluding comment below the quotes).
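    The purely statistical word n-gram model quoted above, and the word-after-word generation discussed in this comment, can be sketched as follows (a minimal illustration with an invented toy corpus; the function names are ours and do not belong to any quoted system):

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count next-word frequencies per single-word context: the fixed
    window of the quoted n-gram assumption, here with n = 2."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_words=5):
    """Generate word after word by taking the most frequent successor:
    no understanding, only conditional frequencies."""
    out = [start]
    while len(out) < max_words and counts.get(out[-1]):
        out.append(counts[out[-1]].most_common(1)[0][0])
    return " ".join(out)

model = train_bigram(["the dog barks", "the dog sleeps", "the cat sleeps"])
print(generate(model, "the"))
```

    The sketch makes the quoted assumption explicit: the probability of the next word depends only on a fixed window of previous words, here a window of one word.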

    We also quote an online encyclopedia about the subject Dialog System (DS): "A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human [respectively to facilitate the conversational exchange between a human and a machine]. Dialogue systems employed one or more of text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.
    The elements of a dialogue system are not defined because this idea is under research,[citation needed] however, they are [respectively it is] different from [a] chatbot.[1] The typical GUI wizard engages in a sort of dialogue, but it includes very few of the common dialogue system components, and the dialogue state is trivial. [...]

    Components
    [...]

    Natural dialogue systems [respectively Conversational systems]
    "A Natural Dialogue System [respectively Conversational System, also often wrongly called Conversational Agent,] is a form of dialogue system that tries to improve usability and user satisfaction by imitating human behaviour"[...]. It addresses the features of a human-to-human dialogue (e.g. sub dialogues and topic changes) and aims to integrate them into dialogue systems for human-machine interaction. Often, (spoken) dialogue systems require the user to adapt to the system because the system is only able to understand a very limited vocabulary, is not able to react to topic changes, and does not allow the user to influence the dialogue flow. Mixed-initiative is a way to enable the user to have an active part in the dialogue instead of only answering questions. However, the mere existence of mixed-initiative is not sufficient to be classified as a natural dialogue system [respectively conversational system]. Other important aspects include:[...]

  • Adaptivity of the system
  • Support of implicit confirmation
  • Usage of verification questions
  • Possibilities to correct information that has already been given
  • Over-informativeness (give more information than has been asked for)
  • Support negations
  • Understand references by analysing discourse and anaphora
  • Natural language generation to prevent monotonous and recurring prompts
  • Adaptive and situation-aware formulation
  • Social behaviour (greetings, the same level of formality as the user, politeness)
  • Quality of speech recognition and synthesis

    Although most of these aspects are issues of many different research projects, there is a lack of tools that support the development of dialogue systems addressing these topics.[7] Apart from VoiceXML that focuses on interactive voice response systems and is the basis for many spoken dialogue systems in industry (customer support applications) and AIML that is famous for the A.L.I.C.E. chatbot, none of these integrate linguistic features like dialogue acts or language generation. [...]

    [...]

    References
    1. Klüwer, Tina [(2011),] "From chatbots to dialog systems."[,] Conversational agents and natural language interaction: Techniques and Effective Practices. [...] .
    [...]
    9. Lester, J.; Branting, K.; Mott, B. (2004), "Conversational Agents", The Practical Handbook of Internet Computing, [...]
    [...]"

    Comment
    More references
    Macskassy, Sofus: A Conversational Agent. 1996. (quoted below)
    Allen, James F., Byron, Donna K., Dzikovska, Myroslava, Ferguson, George, Galescu, Lucian, and Stent, Amanda: Towards Conversational Human-Computer Interaction. 2001. Conversational Interaction and Spoken Dialogue (CISD) Multi-Agent System (MAS) The Rochester Interactive Planning System (TRIPS).
    Perez-Marin, D., and Pascual-Nieto, I.: Conversational agents and natural language interaction: Techniques and effective practices. 2011.

    We note

  • development from chatbot to dialog system,
  • difference between chatbot and dialog system, and
  • difference between dialog system and conversational system, including conversational agent,

    specifically

  • very limited vocabulary,
  • tight focus on topic and inability to react to topic changes, respectively domain-specific, but not general-purpose, and
  • no allowance for a user to influence a dialogue flow.

    This is also the reason why "[i]n the fall [of 2023], Amazon [...] said it was working to make its Alexa voice assistant more conversational".
    This shows that a virtual assistant with Voice User Interface (VUI) is not a type of Conversational User Interface (CUI), because otherwise there would be no need for this further development.
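    The quoted limitations, a tight topic focus and no allowance for the user to influence the dialogue flow, can be contrasted with a minimal frame-based dialogue manager that lets the user fill the slots in any order (a hedged toy sketch; the slots and keyword spotting are invented for the example):

```python
# Minimal frame-based dialogue state: the user may fill the slots in any
# order (a crude form of mixed-initiative) instead of being forced
# through a fixed question sequence. Slots and keywords are invented.
SLOTS = ("city", "date")

def update_state(state, utterance):
    """Fill whichever slots the utterance mentions (toy keyword spotting)."""
    words = utterance.lower().split()
    if "berlin" in words:
        state["city"] = "Berlin"
    if "monday" in words:
        state["date"] = "Monday"
    return state

def next_prompt(state):
    """Ask only for slots that are still missing; confirm when complete."""
    missing = [slot for slot in SLOTS if slot not in state]
    if missing:
        return "Please tell me the {}.".format(missing[0])
    return "Booking for {} on {}. Correct?".format(state["city"], state["date"])

state = update_state({}, "I want to travel on Monday")
print(next_prompt(state))  # prints: Please tell me the city.
state = update_state(state, "To Berlin please")
print(next_prompt(state))  # prints: Booking for Berlin on Monday. Correct?
```

    Because the state is a frame of slots rather than a fixed script, the user can influence the dialogue flow, the difference stressed in the quote.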

    We also quote an online encyclopedia about the subject Conversational User Interface (CUI): "A conversational user interface (CUI) [...] emulates[, mimics, or simulates] a conversation with a real human. [...]
    [...] conversational interfaces use natural language processing (NLP) to allow computers to understand, analyze, and create meaning from human language.[4] Unlike word processors, NLP considers the structure of human language (i.e., words make phrases; phrases make sentences which convey the idea or intent the user is trying to invoke). The ambiguous nature of human language makes it difficult for a machine to always correctly interpret the user's requests, which is why we have seen a shift toward natural-language understanding (NLU).[5]
    NLU allows for sentiment analysis and conversational searches which allows a line of questioning to continue, with the context carried throughout the conversation. NLU allows conversational interfaces to handle unstructured inputs that the human brain is able to understand such as spelling mistakes or follow-up questions.[6] [...]
    [...] there are two main categories of conversational interface; voice assistants [voice user interface] and chatbots [text user interface].

    Voice-based interfaces
    A voice user interface allows a user to complete an action by speaking a command. [...]

    Text-based interfaces
    A chatbot is a web- or mobile-based interface that allows the user to ask questions and retrieve information. [...]"

    Comment
    The initial version of this webpage was publicized only on the 4th of December 2017 and its section "Voice-based interfaces" was titled "Voice assistants" and its section "Text-based interfaces" was titled "Chatbots".

    We would add Natural Language User Interface (NLUI) and Multimodal User Interface (MUI or MMUI).
    We would call CUI Interaction User Interface (IUI).

    What is explained as CUI is called Dialog System (DS or DiaS) and Conversational System (CS or ConS).
    CUI with VUI is called Spoken Dialog System (SDS).
    DiaS with MUI is called Multimodal Dialog System (MDS or MMDS).
    ConS with MUI is called Multimodal Conversational System (MCS or MMCS) and part of our OntoBot.

    We also note

  • difference between chatbot and dialog system.

    For a better understanding of the matter and the differences between the subjects, we also quote the document titled "A Conversational Agent" and publicized in 1996: "Abstract
    Conversational agents are the stepping stones for the next generation of interaction between users and computers. Not only is this a more natural way for people to interact, but also it would increase efficiency and speed in working with computers. [...]

    1 Introduction
    As computers become more and more used by people not only at work, but also at home, so does a more user-friendly environment become more marketable and useful. If people could interact with the computer in a more natural way for them, then work could be done a lot faster. In most cases, people interact with speech and writing. It follows the[n] [...] that computers need to be able to understand language in order to really interact with the user in a natural way. Not only should it understand language, but it should also be able to give a response in a natural way for the user, namely through language. This is where a conversational agent comes in. A conversational agent is a program that understands natural language and is capable of responding in an intelligent way to a users request. [...]

    2 Comprehensive Overview: What is a conversational agent?
    A conversational agent can be defined in many ways, but for this paper we will define a conversational agent as an interactive program that will be able to interact, through typed text, with a user. [...] In this paper I will focus only on the details of the agent, namely how the agent will work with an intermediate logic form [...], how [to] reason with said logic form and create a response again in the same kind of intermediate logic form. [...] description of the major parts to this kind of conversational agent [...].

    2.1 Natural Language Understanding
    [...]

    2.1.1 [Natural Language] Parsing
    [...] But just parsing the sentence is not enough, since not only are a great many sentences ambiguous, they also make references to earlier spoken objects/entities as well as taking advantage of the situation or context to make assumptions about certain interpretations of words or phrases. [...]
    [...] in most cases a sentence like this would not make sense unless in some context, where "it" refers to something talked about earlier in the conversation. In this case, we then need to have some kind of reasoning about which previously introduced entity that "it" could refer to. This is called coreferencing.

    2.1.2 Coreferencing
    Many people have talked about coreferencing, [Allen 95 [Natural Language Understanding], Hobbs 77] among others, and each have proposed different methods in order to achieve this. In order to rightly pick whichever entity is being referred to directly or indirectly we need to keep a list of previously introduced entities and have some information about each in order to figure out which of these best fit into the current sentence. [...]
    [...]
    However, once we have figured out what "it" refers to, we are not done yet. We need to come up with some good representation of this sentence so that the computer has an easier way to deal with it. For this, we've come up with an intermediate representation, called the logic form.

    2.1.3 Logic Form
    The logic form is an intermediate representation of user input where we represent the input in a form better suited for computers with thoughts of going towards some kind of knowledge representation. [...]

    2.1.4 World Knowledge
    [Allen 95, pg. 465] says that "Language cannot be understood without considering the everyday knowledge that all speakers share about the world", and I wholeheartedly agree. In order to understand what is being said, the agent must have some kind of basic world knowledge. [...] If an agent should have all the common knowledge that a user has about how the world works, then its knowledge base would be huge. (Actually, some people have indeed been working towards this goal, in particular [Lenat 95] and his CYC project comes to mind). Knowledge about the world is not enough, though, if we want the agent to be able to function. It should also have some ideas about what other people know and don't know. Also, if we want to be realistic, it can't know everything. Noone knows everything, and therefore some things we will just have to say we don't know or this is what we think it is. This is where belief comes in.

    2.1.5 Beliefs about world, others and self
    The agent needs beliefs. Not only about what it thinks other users (and other agents, if in a multi-agent environment) know or believe, but also it needs to know what it itself believes. As pointed out above, noone knows everything but most people have opinions and beliefs about how they think the world works regardless of whether that is true. [...] It would not be practical to put all beliefs together [...]. If it did that, every time it had to access some information on what it believes about John, for example, it would have to go through all its beliefs about everyone. Therefore, we can partition the beliefs into different categories for each person, say, and itself. These partitions are what are referred to as belief spaces in [Allen 95].
    [...]
    However, there is no easy way to figure this out using only beliefs and world knowledge unless you add a lot of rules [...]. We would need millions of rules even for the simplest things. Something else is needed.

    2.1.6 Reasoning about others intentions/plans
    This something else is reasoning and plan inference. Once the agent has succesfully converted the input into logic form, resolved all coreferencing and clearly understood the sentence, it now needs to be able to figure out what the user expects for an answer. In order to do this, the agent needs to figure out what the user wants. This is not an easy thing to do. For the agent to figure out what the user wants, it needs to figure out the users plans or goals. Finally from there, the agent can then figure out how it can best help the person. [...]
    [...]

    2.1.7 Responding Intelligently
    [...]
    In order to make an intelligent response, we first need to deduce the goal(s) of the user. At that point, we can then decide what kind of information is needed. First we make use of the beliefs about the user to decide which subgoals (s)he already has reached. Then we look at the remaining subgoals and identify what is needed in order to reach them (ie: look at the prerequisites for those subgoals). Finally, using the beliefs about the user, we identify potential places where the user needs help and that is the information we respond with. [...]

    [...]"

    Comment
    First of all, we note the lack of

  • Multimodal User Interface (MUI or MMUI).

    Furthermore, we note that the Dialogue Manager (DM or DiaM) of a Conversational System (CS or ConS) more and more becomes an agent, respectively the Dialogue Management System (DMS) of a ConS more and more becomes an Agent-Based System (ABS), or, being more precise, a ConS gets more and more features of an Intelligent Agent System (IAS), such as

  • Natural Language Understanding (NLU),
  • (intermediate) logic form,
  • coreferencing,
  • reasoning, inferencing of intentions, goals, and plans of a user,
  • goal handling,
  • plan-based reasoning,
  • responding intelligently,
  • etc.

    to cure the deficits of a chatbot and a Dialogue System (DS or DiaS).
    Correspondingly, the resulting system is an Intelligent Dialog System (IDS or IDiaS) with an Intelligent Dialogue Manager (IDM or IDiaM) or Intelligent Dialogue Management System (IDMS or IDiaMS) and is also called a Conversational System (CS or ConS) with a Conversational Manager (CM or ConM) or Conversational Management System (CMS or ConMS), or a Conversational Agent System (CAS or ConAS).
    The moment the DiaM of a ConS also has its own intentions or goals, respectively plans, respectively is goal-oriented or even rational, it becomes an Intelligent Agent System (IAS) (e.g. Belief-Desire-Intention (BDI) agent paradigm, architecture, or model) or even a Cognitive System (CS or CogS) (e.g. H-CogAff architecture and Emotion Machine architecture), or a Cognitive Agent System (CAS or CogAS) based on for example an integrated connectionist model or integrated ANNM (e.g. Distributed Artificial Neural Network (DANN) model DIstributed SCript processing and Episodic memoRy Network (DISCERN)).
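    The coreferencing (see section 2.1.2 of the quoted document) and the (intermediate) logic form (see section 2.1.3) listed among these features can be sketched as follows (a minimal illustration under our own naming; real systems use richer discourse models instead of simple recency):

```python
# Toy sketch of two quoted steps: keep a list of previously introduced
# entities and resolve "it" to the most recent one (coreferencing), then
# build a simple predicate triple as the intermediate logic form.
entities = []  # discourse history, most recent entity last

def introduce(entity):
    entities.append(entity)

def resolve(word):
    """Resolve the pronoun "it" to the most recently introduced entity."""
    if word == "it" and entities:
        return entities[-1]
    return word

def to_logic_form(subject, verb, obj):
    """Intermediate representation: (verb, subject, object)."""
    return (verb, resolve(subject), resolve(obj))

introduce("the report")
lf = to_logic_form("she", "read", "it")
print(lf)  # prints: ('read', 'she', 'the report')
```

    Once the pronoun is resolved and the sentence is in this form, the reasoning and plan inference steps of the quoted document can operate on it.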
    And this is the case with what is wrongly called modern chatbot and virtual assistant based on our coherent Ontologic Model (OM), including

  • Foundational Model (FM), including
    • Foundation Model (FM),
  • Language Model (LM),
  • Machine Learning (ML) Model (MLM),
  • Artificial Neural Network (ANN) Model (ANNM), including
    • Artificial Neural Network (ANN) Language Model (LM) (ANNLM), including
      • Large Language Model (LLM),

    because their LLM is a Cognitive Model (CM), which also integrates a DiaS with a DiaM. We simply call it Ontologic roBot (OntoBot). Solar system wide and beyond.
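    The belief spaces (see section 2.1.5 of the quoted document) and the Belief-Desire-Intention (BDI) agent paradigm mentioned above can be sketched as one minimal deliberation loop (a hedged illustration; the beliefs, desires, and plan are invented for the example):

```python
# Minimal BDI-style sketch: beliefs are partitioned into per-person
# "belief spaces" (as in the quoted paper), desires are goals about what
# another person should know, and acting updates that belief space.
beliefs = {
    "self": {"schedule": "9:00"},  # what the agent itself believes
    "john": {},                    # what the agent believes John knows
}
desires = [("john", "schedule")]   # goal: John should know the schedule

def deliberate(beliefs, desires):
    """Adopt as intention the first desire whose target belief space
    does not yet contain the fact."""
    for person, fact in desires:
        if fact not in beliefs[person]:
            return (person, fact)
    return None

def act(intention, beliefs):
    """Execute the plan for the intention: tell the person the fact,
    then record it in their belief space."""
    person, fact = intention
    beliefs[person][fact] = beliefs["self"][fact]
    return "tell {} the {}".format(person, fact)

intention = deliberate(beliefs, desires)
print(act(intention, beliefs))       # prints: tell john the schedule
print(deliberate(beliefs, desires))  # prints: None (goal achieved)
```

    Partitioning the beliefs per person avoids searching all beliefs about everyone, the practical point made in the quoted section on belief spaces.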

    We also quote an online encyclopedia about the subject chatbot or chatterbot, which was publicized on the 18th of November 2002 (first version): "A Chatterbot is a program which attempts to maintain a conversation with a person. While it is true that a good understanding of a conversation is required to carry on a meaningful dialogue, most chatterbots do not attempt this. Instead they attempt to pick up cue words or phrases from the person which will allow them to use pre-prepared or pre-calculated responses which can move the conversation on in an apparently meaningful way without requiring them to know what they are talking about.
    The classic early chatterbots are ELIZA, PARRY and SHRDLU. More recent programs are Racter, or A.L.I.C.E.."

    Comment
    So short, so far, so good.
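    The cue-word method described in the quote, pre-prepared responses triggered by keywords without any understanding, can be sketched in the manner of ELIZA (a minimal illustration; the rules are invented for the example):

```python
import re

# Cue-word method from the quote: pick up keywords and answer with
# pre-prepared responses, without knowing what is being talked about.
RULES = [
    (r"\bmother\b", "Tell me more about your family."),
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bhello\b", "Hello. How are you today?"),
]
DEFAULT = "Please go on."

def reply(utterance):
    """Return the response of the first matching rule, else a filler."""
    text = utterance.lower()
    for pattern, response in RULES:
        match = re.search(pattern, text)
        if match:
            return response.format(*match.groups())
    return DEFAULT

print(reply("Hello there"))          # prints: Hello. How are you today?
print(reply("I feel tired"))         # prints: Why do you feel tired?
print(reply("The weather is nice"))  # prints: Please go on.
```

    The second rule shows how such a chatterbot moves the conversation on "in an apparently meaningful way": it merely echoes captured words back inside a canned template.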

    We also quote an online encyclopedia about the subject chatbot or chatterbot, which was publicized on the 29th of December 2005: "A chatterbot (also chatbot, chatterbox) is a bot program which attempts to maintain a conversation with a person.

    Method of operation
    A good understanding of a conversation is required to carry on a meaningful dialog but most chatterbots do not attempt this. Instead they attempt to pick up cue words or phrases from the person which will allow them to use pre-prepared or pre-calculated responses which can move the conversation on in an apparently meaningful way without requiring them to know what they are talking about. One exception to this approach is Jabberwacky which attempts to model the way humans learn new facts and language.

    Early chatterbots
    The classic early chatterbots are ELIZA and PARRY. More recent programs are Racter and A.L.I.C.E.. A collection of games and functional features accessed via natural language processing allows ELLA to further extend the potential of chatterbots.

    Advanced chatterbots
    Advanced chatterbots like Jabberwock [(2001, 2003)] are combining several linguistic and Artificial intelligence methods like Fuzzy logic, Context-free grammar algorithm and Markov chain together with a Semantic network associative semantic matrices and classic Supervised learning [(not related to Artificial Neural Network (ANN))] and Pattern recognition. Using this techniques the chatterbot is able to follow the subject of the conversation and to generate most of the replies on the fly instead of looking for a best match in a database. [(line break added)]
    The multi-faceted approach[Multifaceted Conversational Systems. 2005] was pioneered by Albert [O]ne [and "presented at a colloquium on conversational systems in November 2005, [and] involves multiple chatbots working under the control of a master control program"].

    [...]

    Chatterbots in modern AI
    Modern AI research focuses on practical engineering tasks. This is known as weak AI and is distinguished from strong AI, which would have sapience and reasoning abilities.
    There are several fields of AI, one of which is natural language. Many weak AI fields have specialised software or programming languages created for them. For example, one of the 'most-human' natural language chatterbots, A.L.I.C.E., uses a programming language AIML that is specific to its program, and the various clones, named Alicebots. Nevertheless, A.L.I.C.E. is still based on pattern matching without any reasoning. This is the same technique Eliza, the first chatterbot, was using back in 1966. Jabberwacky is a little closer to strong AI, since it learns how to converse from the ground up based solely on user interactions. In spite of that, the result is still very poor, and it is reasonable to state that there is actually no general purpose conversational artificial intelligence.

    [...]"

    Comments
    We note

  • weak AI versus strong AI.

    See also the quote of the document titled "The Beast Can Talk" (How Jabberwock Works), which is about the development and method of operation of the chatbot and virtual character Jabberwock, and the related comment below.

    We also note that the chatbot Albert One is based on natural language programming, but not on ontology (see also our comment to natural language programming below).

    Evoos has Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), ontology, Computational Intelligence (CI) (Fuzzy Logic (FL), Artificial Neural Network (ANN), Probabilistic Model (PM or ProM), and Genetic Algorithms (GA)), Genetic Programming (GP), evolutionary, generative, and creative Bionics, Multi-Agent System (MAS), Holonic Agent System (HAS), Cognitive Agent System (CogAS), Multimodal User Interface (MUI), and so on.

    We also quote an online encyclopedia about the subject chatbot or chatterbot, which was publicized on the 31st of December 2006: "A chatterbot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Though many appear to be intelligently interpreting the human input prior to providing a response, most chatterbots simply scan for keywords within the input and pull a reply with the most matching keywords or the most similar wording pattern from a local database. [...]

    Method of operation
    [...]
    [...] Some programs that use natural language conversation, such as SHRDLU, are not generally classified as chatterbots because they link their speech ability to knowledge of a simulated world. This type of link requires a more complex artificial intelligence (e.g., a "vision" visual and spatial system) than standard chatterbots have.

    [...]

    Chatterbots in modern AI
    [...]
    [...] it is reasonable to state that there is actually no general purpose conversational artificial intelligence. This has led some software developers to focus more on the practical aspect of chatterbot technology - information retrieval.
    [...]

    External links

  • [...]
  • Incognita - Artificial Intelligence Conversationalist
  • [...]"

    Comment

    The quote of an online encyclopedia about the subject SHRDLU and a related comment are given below.

    Some programs have been removed from the description, because they are not classified as a chatbot, but do belong to other fields, such as interactive fiction, virtual character, virtual assistant, and Intelligent Agent-System (IAS).

    That was all that existed before we publicized our Evoos and our OS.

    We also quote an online encyclopedia about the subject chatbot or chatterbot (actual version): "A chatbot [...] is designed to [emulate,] mimic[, or simulate] human conversation [...]. [...]
    [...]
    A major area where chatbots have long been used is in customer service and support, with various sorts of virtual assistants.[9 [cai.tools.sap/blog/: 2017 Messenger Bot Landscape, a Public Spreadsheet Gathering 1000+ Messenger Bots. 3rd of May 2017]] [...]
    As chatbots work by predicting responses rather than knowing the meaning of their responses, this means they can produce coherent-sounding but inaccurate or fabricated content, referred to as 'hallucinations'. [...]

    Background
    [...]
    ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way [...].[13] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
    Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational - even when it is actually based on rather simple pattern-matching - can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. [...]

    Development
    Among the most notable early chatbots are ELIZA (1966) and PARRY (1972).[14][15][16][17] More recent notable programs include A.L.I.C.E. [1995], Jabberwacky [1988] and D.U.D.E ([...] 2006). [...]
    From 1978[19] to some time after 1983,[20] the CYRUS project [...] constructed a chatbot simulating Cyrus Vance (57th United States Secretary of State). It used case-based reasoning, and updated its database daily by parsing wire news from United Press International. The program was unable to process the news items subsequent to the surprise resignation of Cyrus Vance in April 1980, and the team constructed another chatbot simulating his successor, Edmund Muskie.[21][20]
    One pertinent field of AI research is natural-language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML,[3] which is specific to its function as a conversational agent, and has since been adopted by various other developers of, so-called, Alicebots. Nevertheless, A.L.I.C.E. is still purely based on pattern matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.
    Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Still, there is currently no general purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval.
    [...]
    Chatbots may use artificial neural networks as a language model. For example, generative pre-trained transformers (GPT), which use the transformer architecture, have become common to build sophisticated chatbots. The "pre-training" in its name refers to the initial training process on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data. [...]
    [...]

    Applications
    See also: Virtual assistant
    [...]

    [...]

    Limitations of chatbots
    The creation and implementation of chatbots is still a developing area, heavily related to artificial intelligence and machine learning, so the provided solutions, while possessing obvious advantages, have some important limitations in terms of functionalities and use cases. However, this is changing over time.
    The most common limitations are listed below:[94]

  • As the input/output database is fixed and limited, chatbots can fail while dealing with an unsaved query.[59]
  • A chatbot's efficiency highly depends on [natural] language processing and is limited because of irregularities, such as accents and mistakes.
  • Chatbots are unable to deal with multiple questions at the same time and so conversation [simulation] opportunities are limited.[94]
  • Chatbots' Artificial Neural Network (ANN) Language Models (LMs) (ANNLMs) require a large amount of conversational data to train. Generative [Artificial Intelligence ANN] models, which are based on deep learning algorithms to generate new responses word by word based on user input, are usually trained on a large dataset of natural-language phrases.[3]
  • Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user.[95]
  • As it happens usually with technology-led changes in existing services, some consumers, more often than not from older generations, are uncomfortable with chatbots due to their limited understanding, making it obvious that their requests are being dealt with by machines.[94]

    [...]"

    Comment
    We note

  • a chatbot has a fixed and limited input/output database,
  • a chatbot is a program for a specific topic of a specific domain and therefore not a Dialog System (DS or DiaS),
  • the difference between chatbot and dialog system, and
  • no Natural Language Understanding (NLU) and therefore no Conversational System (CS or ConS), including Conversational User Interface (CUI) and Conversational Agent System (CAS).

    A ConS includes an utterance interpreter, a Dialogue Manager (DM) or Dialogue Management System (DMS), and a response generator, and requires understanding of queries and reasoning.
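    The three-stage ConS structure named above can be sketched as a minimal pipeline. All intents, rules, and templates below are hypothetical illustrations, not part of any cited system:

```python
# Minimal sketch of a Conversational System (ConS) pipeline:
# utterance interpreter -> Dialogue Manager (DM) -> response generator.
# All intents, rules, and templates are hypothetical illustrations.

def interpret(utterance):
    """Utterance interpreter: map raw text to an intent frame."""
    text = utterance.lower().strip()
    if "weather" in text:
        return {"intent": "ask_weather"}
    if text.endswith("?"):
        return {"intent": "question"}
    return {"intent": "statement"}

def manage(state, frame):
    """Dialogue Manager: choose the next dialogue act from state and intent."""
    state["turns"] = state.get("turns", 0) + 1
    if frame["intent"] == "ask_weather":
        return "inform_weather"
    if frame["intent"] == "question":
        return "answer"
    return "acknowledge"

def generate(act):
    """Response generator: realize the chosen dialogue act as text."""
    templates = {
        "inform_weather": "I have no live weather data.",
        "answer": "That is a good question.",
        "acknowledge": "I see.",
    }
    return templates[act]

def respond(state, utterance):
    return generate(manage(state, interpret(utterance)))

state = {}
print(respond(state, "How is the weather?"))  # -> I have no live weather data.
```

    The dialogue state is threaded through every turn, which is exactly what distinguishes such a system from a stateless keyword matcher.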

    What is called Conversational Artificial Intelligence has already existed for many years with the related essential parts of our

  • Evoos with our coherent Ontologic Model (OM), and Ontologic Computing (OC), including evolutionary, generative, and creative Bionics, and much more, and
  • OS with our Ontologic roBot (OB or OntoBot) and much more,

    which, by the way, have existed since at least 1999 and 2006 respectively.

    We also quote an online encyclopedia about the subject ELIZA: "ELIZA is an early natural language processing computer program [...]. Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, but had no representation that could be considered really understanding what was being said by either party. [...] the pattern matching directives that contained most of its language capability were provided in separate "scripts" [...].[8] The most famous script [...] used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots ("chatbot" modernly) [...].
    [...]

    [...]

    Design
    Weizenbaum originally wrote ELIZA [...] as a program to make [the simulation of] natural-language conversation [or natural-language interaction] possible with a computer.[25] To accomplish this, Weizenbaum identified five "fundamental technical problems" for ELIZA to overcome: the identification of key words, the discovery of a minimal context, the choice of appropriate transformations, the generation of responses in the absence of key words, and the provision of an editing capability for ELIZA scripts.[19] Weizenbaum solved these problems and made ELIZA such that it had no built-in contextual framework or universe of discourse.[18] However, this required ELIZA to have a script of instructions on how to respond to inputs from users.[6]
    ELIZA starts its process of responding to an input by a user by first examining the text input for a "keyword".[...]
    Following the first examination, the next step of the process is to apply an appropriate transformation rule, which includes two parts: the "decomposition rule" and the "reassembly rule".[...]
    The decomposition rule then designates a particular reassembly rule, or set of reassembly rules, to follow when reconstructing the sentence.[5] The reassembly rule takes the fragments of the input that the decomposition rule had created, rearranges them, and adds in programmed words to create a response [text output]. [...]
    These steps represent the bulk of the procedures that ELIZA follows in order to create a response from a typical input, though there are several specialized situations that ELIZA/DOCTOR can respond to. One Weizenbaum specifically wrote about was when there is no keyword. One solution was to have ELIZA respond with a remark that lacked content [...].[19] The second method was to use a "MEMORY" structure, which recorded prior recent inputs, and would use these inputs to create a response referencing a part of the earlier conversation when encountered with no keywords.[26] This was possible due to [the programming language]'s ability to tag words for other usage, which simultaneously allowed ELIZA to examine, store, and repurpose words for usage in outputs.[19]
    While these functions were all framed in ELIZA's programming, the exact manner by which the program dismantled, examined, and reassembled inputs is determined by the operating script. [...]
    [...]"

    Comment
    We note that ELIZA, one of the first attempts at simulating a human conversation, is characterized by

  • simulation and illusion of understanding,
  • (transformation) rule-based (decomposition and reassembly rules),
  • (operating) script-based,
  • no Natural Language Understanding (NLU), and
  • no Dialog System (DS or DiaS), comprising Dialogue Management System (DMS).
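    The keyword, decomposition, and reassembly mechanism described in the quote can be sketched as follows; the two rules and the canned replies are hypothetical illustrations, not Weizenbaum's original DOCTOR script:

```python
import re

# ELIZA-style processing as described in the quote: find a keyword,
# decompose the input around it, and reassemble the captured fragments
# into a reply. The rules and replies are hypothetical illustrations.

RULES = [
    # (decomposition rule, reassembly rule)
    (re.compile(r".*\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r".*\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]
NO_KEYWORD_REPLY = "Please go on."  # content-free remark when no keyword matches

def eliza_respond(user_input):
    text = user_input.strip().rstrip(".!")
    for decomposition, reassembly in RULES:
        match = decomposition.match(text)
        if match:
            # Reassembly: rearrange the captured fragment into the template.
            return reassembly.format(match.group(1))
    return NO_KEYWORD_REPLY

print(eliza_respond("I am sad."))  # -> How long have you been sad?
```

    The fallback reply illustrates the "remark that lacked content" used when no keyword is found; no understanding of the input is involved at any step.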

    We quote the document titled "The Beast Can Talk" (How Jabberwock Works): "[...]
    But Jabberwock wasn't designed to win a Turing Test or to prove to be as intelligent as a human. The roots and the goals of Jabberwock are different.
    The main idea of Jabberwock was to create a virtual character. [...] Jabberwock was part of a journalistic research we did in 2001 to publish an issue about AI and robots [...]. We learnt about talking machines and so called chatterbots. We learnt about their ancestors, especially Interactive Fiction stories and MUDs, and we learnt that there is no artificial intelligence in chatterbots. And we learnt that it's pretty easy to develop a chatterbot by oneself. Every person who is able to imagine and to write funny dialogs can do it. You don't have to be a programmer but an author. [...]
    [...]

    How Does Jabberwock Work
    [...]
    Jabberwock's engine consists of two main parts: the Parser and the Brain Files.
    The Parser consists of the Website Interface, the Preprocessor, the Pattern Matcher, and the Response Generator. The Parser is [...] an asynchronic multi-user web-server [...].
    [...] There are no hard coded rules. A simple set of control characters is used to tell the Parser how to deal with the Templates. [...]
    The Brain Files are a set of flat text files, containing lists of words and phrases and the Templates, which are used to match the user's input, called a Query, and to generate a Response.

    1) Preprocessor
    Jabberwock's engine consists of several preprocessor parts:
    * a basic Spell Checker [...]
    * a Nonsense Checker [...]
    * a basic Grammar Checker [...]
    * a Normalizer [...]
    * [...] client side Input Validator [...]

    2) Pattern Matcher
    Jabberwock's engine is first of all a pattern [m]atcher. In general there is not much difference to the simple old ELIZA method. But unlike some other pattern matchers Jabberwock is NOT using the user's query and search[e]s if there is an exact match (or parts of it) in a database of already recognized user queries or canned responses.
    [...]
    The main parts of the pattern matching engine are Templates, Containers, Wildcards, Arrays, a Semantic Matrix, and the Subject Of Conversation.

    Templates
    A template is a model of a sentence, or a set of phrases, or a single phrase, or a set of words or a single word based on rules of Context-Free Grammar (CFG). [...]
    [...]

    [...]

    Associative Semantic Matrices
    You also can check for different form[s] of words (think, thought, thinking), and you can connect containers and arrays of cognitional words of the same linguistic family [...].
    Th[e]se connected groups of words and phrases are used as a semantic matrix. It helps to get the meaning of a sentence and the Subject Of Conversation, and in general to manage and drive conversation threads.

    Subject Of Conversation
    Perhaps the most important part of a conversation is to get the meaning of a sentence, and to get the subject of conversation. [...]
    The engine uses the associative semantic matrices to get and to associate the topic during a conversation [...].
    [...]
    So it can be detected if the user is introducing another topic or e.g. changes the subject each query. [...]

    3) Scripted Dialogs
    The basic techni[que]s of doing scripted dialogs in chatterbots are quite simple. They [were] at first introduced in Interactive Fiction, Hypertext Fiction and Interactive Storytelling.
    [...]

    4) Response Generator
    [...]

    5) Moods
    [...]

    6) Supervised Learning
    [...]
    If errors are detected, or if dialogs are flawed then the author comes in[to] play. The author of Jabberwock is setting up the word lists, is building up and adding new templates, is adjusting the keywords, and is writing the scripted dialogs.
    This is not an automatic process. It's done by hand. It's a writing process. If Jabberwock failed to give a smart or witty reply, then not the engine failed but the author. The engine is just a tool to help the author to get the subject of conversation and to write the best templates of useful, meaningful or funny replies.
    And because a huge part of Jabberwock's brain file is buil[t] by using Scripted Dialog sequences you can compare this writing process with an author writing comedy short stories or soap operas based on funny dialogs. Now we would have to argue what "funny" means - but this would be a different story.

    7) Known Issues
    * Jabberwock has definitely NO knowledge about the real world. [...]
    Nevertheless Jabberwock is able to give a definition about EVERY given word - but it might not be the correct definition because it's just a random[ly] generated definition. In fact [t]he goal is to fool the user and to make fun of the question [...]. [...]
    [...]
    * Jabberwock's engine doesn't use external database engines like MySQL but flat text files. [...]
    The used flat files are not as fast as a database engine, but so far we were able to outweigh this issue by using associative arrays and asynchronic tasks. [...]
    * Jabberwock is stupid. There is no reason to deny this fact. His vocabulary and his knowledge [are] bound to a limited domain [...]. There is no artificial intelligence in Jabberwock.
    * Jabberwock was not created to pass the Turing Test.
    [...]
    * The development of Jabberwock was stopped in 2005.
    [...]"

    Comment
    Jabberwock won with a simple Dialog System with simple associative semantic matrices. Only at the same time did we understand how far ahead we already were on a completely different level with our Evoos.
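    The associative semantic matrices and Subject Of Conversation detection described in the quote can be sketched as follows; the word groups are hypothetical and far smaller than Jabberwock's Brain Files:

```python
# Sketch of an associative semantic matrix as described in the quote:
# connected groups of related words are used to detect the Subject Of
# Conversation and topic changes. The word groups are hypothetical.

SEMANTIC_MATRIX = {
    "weather": {"rain", "sun", "cloud", "storm", "snow"},
    "food":    {"eat", "cook", "pizza", "hungry", "dinner"},
    "music":   {"song", "sing", "guitar", "band", "concert"},
}

def detect_subject(query):
    """Return the topic whose word group best overlaps the query."""
    words = set(query.lower().replace("?", " ").replace(".", " ").split())
    best, best_hits = None, 0
    for topic, group in SEMANTIC_MATRIX.items():
        hits = len(words & group)
        if hits > best_hits:
            best, best_hits = topic, hits
    return best

def topic_changed(previous, query):
    """Detect whether the user introduced another topic."""
    current = detect_subject(query)
    return current is not None and current != previous

print(detect_subject("Will it rain or snow today?"))  # -> weather
```

    Tracking the detected topic across turns is what lets such an engine notice when "the user is introducing another topic or e.g. changes the subject each query".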

    We also quote an online encyclopedia about the subject virtual assistant: "A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice - as some virtual assistants are able to interpret human speech and respond via synthesized voices.
    [...]
    Recently, the emergence of recent artificial intelligence based chatbots, such as ChatGPT, has brought increased capability and interest to the field of virtual assistant products and services.[4][5][6]

    History
    Experimental decades: 1910s-1980s
    [...]
    Another early tool which was enabled to perform digital speech recognition was the IBM Shoebox voice-activated calculator, presented to the general public during the 1962 Seattle World's Fair after its initial market launch in 1961. This early computer, developed almost 20 years before the introduction of the first IBM Personal Computer in 1981, was able to recognize 16 spoken words and the digits 0 to 9.
    The first natural language processing computer program or the chatbot ELIZA was developed [... 1964 to 1967] [...] ELIZA used pattern matching and substitution methodology into scripted responses to simulate conversation, which gave an illusion of understanding on the part of the program.
    [...]
    In 1986 Tangora was an upgrade of the Shoebox, it was a voice recognizing typewriter. Named after the world's fastest typist at the time, it had a vocabulary of 20,000 words and used prediction to decide the most likely result based on what was said in the past. IBM's approach was based on a hidden Markov model, which adds statistics to digital signal processing techniques. The method makes it possible to predict the most likely phonemes to follow a given phoneme.
    [...]

    Birth of smart virtual assistants: 1990s-2010s
    See also: Weak artificial intelligence and Speech recognition
    In the 1990s, digital speech recognition technology became a feature of the personal computer with IBM, Philips and Lernout & Hauspie fighting for customers. Much later the market launch of the first smartphone IBM Simon in 1994 laid the foundation for smart virtual assistants as we know them today.[citation needed]
    [...]
    The first modern digital virtual assistant installed on a smartphone was Siri [...]
    In November 2014, Amazon announced Alexa alongside the Echo.[16]
    In April 2017 Amazon released a service for building conversational interfaces for any type of virtual assistant or interface.

    Artificial intelligence and language models: 2020s-present
    See also: History of artificial intelligence, Generative pre-trained transformer, Natural language generation, and Language model
    In the 2020s, artificial intelligence (AI) systems like ChatGPT have gained popularity for their ability to generate human-like responses to text-based conversations. In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was then the "largest language model ever published at 17 billion parameters."[17 [Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG). [13th of February 2020]]] On November 30, 2022, ChatGPT was launched as a prototype and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. [...] In February 2023, Google began introducing an experimental service called "Bard" which is based on its LaMDA program to generate text responses to questions asked based on information gathered from the web.
    While ChatGPT and other generalized chatbots based on the latest generative AI are capable of performing various tasks associated with virtual assistants, there are also more specialized forms of such technology that are designed to target more specific situations or needs.[18][4]

    Method of interaction
    [...]
    Many virtual assistants are accessible via multiple methods, offering versatility in how users can interact with them, whether through chat, voice commands, or other integrated technologies.
    Virtual assistants use natural language processing (NLP) to match user text or voice input to executable commands. Some continually learn using artificial intelligence techniques including machine learning and ambient intelligence.

    [...]

    Developer platforms
    [...]
    Previous generations
    In previous generations of text chat-based virtual assistants, the assistant was often represented by an avatar (a.k.a. interactive online character or automated character) - this was known as an embodied agent.

    [...]"

    Comment

    We also quote an online encyclopedia about the subject SHRDLU: "SHRDLU is an early natural-language understanding computer program that was developed by Terry Winograd at MIT in 1968-1970. In the program, the user carries on a conversation with the computer, moving objects, naming collections and querying the state of a simplified "blocks world", essentially a virtual box filled with different blocks.[1]
    [...]

    Functionality
    SHRDLU is primarily a language parser that allows user interaction using English terms. The user instructs SHRDLU to move various objects around in the "blocks world" containing various basic objects: blocks, cones, balls, etc. What made SHRDLU unique was the combination of four simple ideas that added up to make the simulation of "understanding" far more convincing.
    One was that SHRDLU's world is so simple that the entire set of objects and locations could be described by including as few as perhaps 50 words: nouns like "block" and "cone", verbs like "place on" and "move to", and adjectives like "big" and "blue". The possible combinations of these basic language building blocks are quite simple, and the program is fairly adept at figuring out what the user means.
    SHRDLU also includes a basic memory to supply context. One could ask SHRDLU to "put the green cone on the red block" and then "take the cone off"; "the cone" would be taken to mean the green cone one had just talked about. SHRDLU can search back further through the interactions to find the proper context in most cases when additional adjectives were supplied. One could also ask questions about the history, for instance one could ask "did you pick up anything before the cone?"
    A side effect of this memory, and the original rules SHRDLU was supplied with, is that the program can answer questions about what was possible in the world and what was not. For instance, SHRDLU can deduce that blocks could be stacked by looking for examples, but also realize that triangles couldn't be stacked, after having tried it. The "world" contains basic physics to make blocks fall over, independent of the language parser.
    Finally, SHRDLU can also remember names given to objects, or arrangements of them. For instance one could say "a steeple is a small triangle on top of a tall rectangle"; SHRDLU can then answer questions about steeples in the blocks world, and build new ones.
    [...]

    [...]

    Consequences
    SHRDLU was considered a tremendously successful demonstration of artificial intelligence (AI). This led other AI researchers to excessive optimism which was soon lost when later systems attempted to deal with situations with a more realistic level of ambiguity and complexity[citation needed]. Subsequent efforts of the SHRDLU type, such as Cyc, have tended to focus on providing the program with considerably more information from which it can draw conclusions.
    [...]"

    Comment
    Despite the title "Procedures as a Representation for Data in a Computer Program for Understanding Natural Language" (1971) SHRDLU is not Natural Language Understanding (NLU), but only Natural Language Processing (NLP) (Natural Language Parsing (NLP)) and a simulation of understanding. In fact, what is called deduction and realization is already included in the very small controlled language and the related program rules and is about rule-based processing of the data structure in relation to geometric objects, but it is not about reasoning in relation to NLU and conversation.
    Eventually, "Winograd has distanced himself from SHRDLU and the field of AI, believing SHRDLU a research dead end." We would call it an interesting research start.
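    The memory-based reference resolution described in the quote ("the cone" resolving to the most recently mentioned cone) can be sketched minimally; the class, the tiny world, and the object names are hypothetical and far simpler than SHRDLU's actual procedures:

```python
# Sketch of SHRDLU-style reference resolution as described in the quote:
# a memory of mentioned objects supplies the context for phrases like
# "the cone". The tiny world and the object names are hypothetical.

class BlocksWorld:
    def __init__(self):
        self.history = []  # mentioned objects, most recent last

    def mention(self, obj):
        self.history.append(obj)

    def resolve(self, noun):
        """Resolve 'the <noun>' to the most recently mentioned match."""
        for obj in reversed(self.history):
            if noun in obj:
                return obj
        return None

world = BlocksWorld()
world.mention("red block")
world.mention("green cone")
print(world.resolve("cone"))  # -> green cone
```

    Searching the history backwards reproduces the behaviour that "the cone" is taken to mean the cone one had just talked about, which is rule-based context lookup, not reasoning.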

    We also quote an online encyclopedia about the subject dual process theory: "In psychology, a dual process theory provides an account of how thought can arise in two different ways, or as a result of two different processes. Often, the two processes consist of an implicit (automatic), unconscious process and an explicit (controlled), conscious process. Verbalized explicit processes or attitudes and actions may change with persuasion or education; though implicit process or attitudes usually take a long amount of time to change with the forming of new habits. [...]"

    Comment
    A Cognitive System (CS or CogS), including Cognitive Agent System (CAS or CogAS), has

  • short-term memory and
  • long-term memory, including
    • implicit memory or non-declarative memory, including
      • procedural memory, and
      • emotional conditioning,

      and

    • explicit memory or declarative memory (can be described in words), including
      • semantic memory (general world knowledge independent of personal experience),
      • associative memory,
    • hybrid memory, including
      • autobiographical memory, and
      • spatial memory.

    What is wrongly called dual process chatbot is the integration of the fields of CogAS and ConAS.
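    A minimal sketch of the dual-process idea from the quote, assuming a cache of automatic associations standing in for the implicit process and a slow routine standing in for the explicit one (all names and data hypothetical):

```python
# Sketch of the dual-process idea from the quote: an implicit (automatic,
# fast) process answers from learned associations, and an explicit
# (controlled, slow) process is consulted only on a miss, after which the
# association slowly becomes a habit. All names are hypothetical.

class DualProcessAgent:
    def __init__(self):
        self.implicit = {}  # automatic, habit-like associations

    def explicit_process(self, stimulus):
        """Stand-in for the slow, controlled, conscious process."""
        return "reasoned response to " + stimulus

    def respond(self, stimulus):
        if stimulus in self.implicit:             # implicit: automatic
            return self.implicit[stimulus]
        answer = self.explicit_process(stimulus)  # explicit: controlled
        self.implicit[stimulus] = answer          # habit formation
        return answer
```

    The cache-then-reason split mirrors the quote's point that explicit attitudes can change quickly while implicit ones change only through the slow forming of new habits.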

    In the following, we quote an online encyclopedia and a few reports about related plagiarisms and fakes of essential parts of our integrating OS Architecture (OSA) and OS Components (OSC), specifically our OntoBot, and also OntoSearch and OntoFind, to give additional information that is relevant in the concluding section.

    We quote an online encyclopedia about the subject Microsoft Copilot: "Microsoft Copilot is a generative artificial intelligence chatbot developed by Microsoft. Based on a large language model, it was launched in February 2023 as Microsoft's primary replacement for the discontinued Cortana.
    [...]

    [...]

    History
    As Bing Chat
    On February 7, 2023, Microsoft began rolling out a major overhaul to Bing, called the new Bing.[20] A chatbot feature, at the time known as Bing Chat, had been developed by Microsoft and was released in Bing and Edge as part of this overhaul. According to Microsoft, one million people joined its waitlist within a span of 48 hours.[21] Bing Chat was available only to users of Microsoft Edge and Bing mobile app, and Microsoft claimed that waitlisted users would be prioritized if they set Edge and Bing as their defaults, and installed the Bing mobile app.[22]
    When Microsoft demoed Bing Chat to journalists, it produced several hallucinations, including when asked to summarize financial reports.[23] The new Bing was criticized in February 2023 for being more argumentative than ChatGPT, sometimes to an unintentionally humorous extent.[24][25] The chat interface proved vulnerable to prompt injection attacks [...]
    [...]
    In March 2023, Bing incorporated Image Creator, an AI image generator powered by OpenAI's DALL-E 2, which can be accessed either through the chat function or a standalone image-generating website.[34] [...]
    [...]

    Services
    [...]

    Languages [Multilingual]
    Copilot is able to communicate in numerous languages and dialects.[48][69] [...]

    Technology
    Copilot utilizes the Microsoft Prometheus model. According to Microsoft, this uses a component called the Orchestrator, which iteratively generates search queries, to combine the Bing search index and results[72 [Building the New Bing. [21st of February 2023]]] with OpenAI's GPT-4,[73 [Microsoft's new Bing was using GPT-4 all along. [14th of March 2023]]][74] [...] foundational large language models, which have been fine-tuned using both supervised and reinforcement learning techniques.

    Windows
    Microsoft Copilot in Windows supports the use of voice commands. By default, it is accessible via the Windows taskbar.[77 [Hands On With Microsoft Copilot in Windows 11, Your Latest AI Assistant. [30th of September 2023]]] Copilot in Windows is also able to provide information on the website currently being browsed by a user in Microsoft Edge.[78]

    [...]

    Microsoft 365
    Copilot can be used to rewrite and generate text based on user prompts in Microsoft 365 services, including Microsoft Word, Microsoft Excel, and PowerPoint.[48 [Microsoft's new Copilot will change Office documents forever. [17th of March 2023]]][81] According to [...] the head of Microsoft 365, Copilot for Microsoft 365 uses Microsoft Graph, an API, to evaluate context and available Microsoft 365 user data before modifying and sending user prompts to the language model.[82 [Microsoft announces Copilot: the AI-powered future of Office documents. [15th of March 2023]]] After receiving its output, Microsoft Graph performs additional context-specific processing before sending the response to Microsoft 365 apps to generate content.[82]
    [...]"

    Comment
    We note

  • foundational large language models, foundational models, but not foundation models, and
  • context and context-specific processing.
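The Orchestrator pattern quoted above, which iteratively generates search queries and combines the search index results with a language model, can be sketched as follows. All function names below are hypothetical stand-ins for illustration only, not Microsoft's actual components or API.

```python
# Minimal sketch (hypothetical names) of the quoted "Orchestrator" pattern:
# iteratively derive search queries from a user prompt, fetch results from
# a search index, and ground the language model's answer in them.

def generate_queries(prompt, max_queries=2):
    # Stand-in for the model-driven, iterative query-generation step.
    return [prompt] + [f"{prompt} (refinement {i})" for i in range(1, max_queries)]

def search_index(query):
    # Stand-in for the search-index lookup returning result snippets.
    return [f"snippet for: {query}"]

def grounded_answer(prompt, snippets):
    # Stand-in for the LLM call; a real system would send the prompt
    # together with the retrieved snippets to the model.
    context = "\n".join(snippets)
    return f"Answer to '{prompt}' grounded in {len(snippets)} snippets:\n{context}"

def orchestrate(prompt):
    snippets = []
    for query in generate_queries(prompt):
        snippets.extend(search_index(query))
    return grounded_answer(prompt, snippets)

print(orchestrate("quarterly revenue of ExampleCorp"))
```

The point of the sketch is only the control flow: query generation, retrieval, and generation are separate steps that the orchestrating component chains together.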

    We quote an online encyclopedia about the subject Perplexity AI, which was publicized on the 7th of January 2024 (first version): "Perplexity.ai is an AI-powered search engine that provides direct answers to queries using natural language predictive text.[1] Touted as "an alternative to traditional search engines," Perplexity generates answers using curated sources from the web and cites links within the text response. The Perplexity model is based on [OpenAI] GPT-3.5, although "Pro" users have access to [OpenAI] GPT-4 and [Anthropic] Claude 2.[2]
    [...] [3 [WSJ News Exclusive [-] Jeff Bezos Bets on a Google Challenger Using AI to Try to Upend Internet Search. [4th of January 2024]]]

    References
    [...]
    2. "What model does Perplexity use and what is the Perplexity model?". blog.perplexity.ai.
    3. [Wall Street] Journal, Miles Kruppa [...] (2024-01-04). "WSJ News Exclusive [] Jeff Bezos Bets on a Google Challenger Using AI to Try to Upend Internet Search". WSJ.
    4. Schwartz, Eric Hal (2023-11-30). "Perplexity Debuts 2 Online LLMs With Real-Time Knowledge". Voicebot.ai.
    [...]
    7. "Report: Generative AI search startup Perplexity AI seeking millions in venture capital funding". [...]. 2023-10-24."

    We quote an online encyclopedia about the subject Perplexity AI, which was publicized on the 28th of July 2024 (actual version): "Perplexity AI is an AI chatbot-powered research and conversational search engine that answers queries using natural language predictive text. Launched in 2022, Perplexity generates answers using sources from the web and cites links within the text response.[2] Perplexity works on a freemium model; the free product uses the company's standalone large language model (LLM) that incorporates natural language processing (NLP) capabilities, while the paid version Perplexity Pro has access to [OpenAI] GPT-4, [Anthropic] Claude 3.5, Mistral Large, [Meta (Facebook)] Llama 3[, Alphabet (Google) Gemini] and an Experimental Perplexity Model.[3][2 [Perplexity AI raises $73.6M in funding round led by Nvidia, Bezos, now valued at $522M. [6th of January 2024]] [(quoted below)]][1] [...]

    [...]

    Functionality
    Perplexity's main product is its search engine, which relies on natural language processing.[6] It utilizes the context of the user queries to provide a personalized search result. Perplexity summarizes the search results and produces a text with inline citations.[6] It allows users to ask follow-up questions that are interpreted in the same context.
    Perplexity uses a freemium model and provides basic search functionalities and all search modes for free. The 'Focus' feature allows users to restrict the search to Reddit, YouTube, [illegal] WolframAlpha [online encyclopedia: "an answer engine [] offered as an online service that answers factual queries by computing answers from externally sourced data"] or to restrict it to Academic (for searching research papers) or to Writing (Disables internet access for the LLM).[8]
    Perplexity's paid variant, the "Pro" mode (formerly Copilot), asks the user clarifying questions to refine queries. It enables users to upload and analyze local files, including images, alongside generating images using AI. Additionally, it provides access to an API.[6] Perplexity launched a new enterprise version of its product in April 2024.[1] In May 2024, Perplexity launched a new feature called Pages, which generates a customizable webpage based on user prompts. Pages utilizes Perplexity's AI search models to gather information and create a research presentation that can be published and shared with others.[9]

    Use of content from media outlets
    [...]
    Amazon Web Services, which hosts the Perplexity crawler, has a terms of service clause prohibiting its users from ignoring the robots.txt standard. Amazon began a "routine" investigation into the company's usage of Amazon Elastic Compute Cloud.[15]"

    Comment
    We note

  • ...
  • LLM incorporates NLP capabilities and
  • response generator.
    The details of the technology, application, and service are irrelevant, because our integrating OS Architecture (OSA), and our OntoBot, OntoSearch, and OntoFind OS Components (OSC) have been taken as source of inspiration and blueprint without referencing, authorization, and licensing, and also in a way that interferes with, and also obstructs, undermines, and harms the exclusive moral rights respectively Lanham (Trademark) rights (e.g. exploitation (e.g. commercialization (e.g. monetization))) of C.S. and our corporation.
    But this is not the only legal issue.
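The answer-with-inline-citations flow described in the Perplexity quote (summarized search results with numbered citations linking back to sources) can be sketched as follows; the function and its inputs are hypothetical illustrations, not Perplexity's implementation.

```python
# Hedged sketch of the inline-citation pattern: each summary sentence is
# tagged with a numbered citation, and a reference list maps numbers to
# the collected sources.

def cite_sources(summary_sentences, sources):
    cited, refs = [], []
    for i, (sentence, src) in enumerate(zip(summary_sentences, sources), start=1):
        cited.append(f"{sentence}[{i}]")   # inline citation marker
        refs.append(f"[{i}] {src}")        # numbered reference entry
    return " ".join(cited) + "\n" + "\n".join(refs)

print(cite_sources(
    ["The engine answers in natural language", "Follow-ups reuse the context"],
    ["example.org/a", "example.org/b"],
))
```

In a real system the summary sentences would come from a language model conditioned on the retrieved pages; the sketch only shows how generated text and source links are interleaved.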

    We quote a first report, which is about investors in Perplexity AI and was publicized on the 6th of January 2024: "Perplexity AI raises $73.6M in funding round led by Nvidia, Bezos, now valued at $522M
    [...]
    Perplexity AI's CEO, Aravind Srinivas, has expressed confidence in the startup's approach, emphasising its ability to fine-tune various top-performing AI models rather than relying on a single one. This strategy, he believes, positions Perplexity AI as a next-generation search tool that will eventually be seen as a successor to legacy platforms like Google.
    "It removes the burden of prompt engineering and does not require users to ask perfectly phrased questions to get the answers they seek. This enables users to gain more relevant and comprehensive answers than other AI chatbots, traditional search engines, or research tools," Srinivas said in the blog released with the funding announcement.
    The company's innovative search engine operates through a chatbot-like interface, allowing users to ask questions in natural language. It offers a range of large language models (LLMs) for users to choose from, including OpenAI's GPT-4, Anthropic's Claude 2.1 [3.5], [Mistral Large, Meta (Facebook) Llama 3,] Google Gemini, or Perplexity's own [Experimental] Pro model. With a subscription fee of $20 per month, users can tailor their search experience to their preferences."

    Comment
    Interestingly, Amazon keeps its voice-based virtual assistant Alexa and this LLM search engine separated.
    We have seen this also with Apple and its virtual assistant Siri, which took the alternative strategy with Natural Language User Interface (NLUI), specifically Natural Language Search Engine (NLSE). But Siri is already based on a Cognitive Agent System (CAS), which in part is already based on our Evoos. Eventually, this combination or integration is part of our OntoBot with OntoSearch.
    Interestingly, Apple also keeps its Siri and that LLM chatbot separated.

    We quote an online encyclopedia about the subject SearchGPT, which was publicized on the 26th of July 2024, 04:00 (first version): "SearchGPT is a prototype aimed at enhancing AI search capabilities by integrating real-time web information. It provides quick, timely answers with clear source citations, and allows conversational follow-up questions. The prototype is designed to help users discover high-quality content from publishers and is currently being tested with a small group. Feedback from users and publishers will be used to refine the features for future integration into ChatGPT. Publishers can manage their appearance in search results without affecting their content's involvement in AI training. [1]
    Wiggers, Kyle (2024-07-25). "With Google in its sights, OpenAI unveils SearchGPT". [...]. Retrieved 2024-07-26."

    Comment
    Interestingly, OpenAI keeps the fields of chatbot and search engine separated, which suggests and supports our opinion that the white, yellow, or red line runs through this area and also other areas, specifically between conversational chat and conversational search.

    We quote an online encyclopedia about the subject SearchGPT, which was publicized on the 26th of July 2024, 18:03: "SearchGPT is a prototype search engine developed by OpenAI, it launched on 26 July, 2024. It combines traditional search engine features with generative AI capabilities.[1][2] It aims to give users "fast and timely answers with clear and relevant sources."[3] According to [a] Journal, SearchGPT is "taking direct aim at Google."[4]
    SearchGPT was first introduced as a prototype in a limited release. There are plans to integrate it into ChatGPT.[5]

    See also

  • Perplexity.ai

    References
    1. Rogers, Reece. "SearchGPT Is OpenAI's Direct Assault on Google". [...]. Retrieved 2024-07-26.
    2. Wiggers, Kyle (2024-07-25). "With Google in its sights, OpenAI unveils SearchGPT". [...]. Retrieved 2024-07-26.
    3. Field, Hayden (2024-07-25). "OpenAI announces a search engine called SearchGPT; Alphabet shares dip". [...]. Retrieved 2024-07-26.
    4. Seetharaman, Deepa (July 25, 2024). "OpenAI Is Launching Search Engine, Taking Direct Aim at Google". [...].
    5. Robison, Kylie (2024-07-25). "OpenAI announces SearchGPT, its AI-powered search engine". [...]. Retrieved 2024-07-26."

    Comment

    We quote an online encyclopedia about the subject SearchGPT, which was publicized on the 28th of July 2024 (actual version): "SearchGPT is a prototype search engine developed by OpenAI, it launched on 26 July, 2024. It combines traditional search engine features with generative AI capabilities.[1][2] The search feature positions the company as a direct competitor to major search engines, notably Google and Bing, the latter being a product of OpenAI's largest investor, Microsoft.[3] SearchGPT was first introduced as a prototype in a limited release to 10,000 test users. OpenAI ultimately intends to incorporate the search features into ChatGPT.[4]
    OpenAI announced its partnership with publishers for SearchGPT, providing them with options on how their content appears in the search results and ensuring the promotion of trusted sources.[5]

    See also

  • Comparison of web search engines
  • Google search
  • List of search engines
  • Timeline of web search engines

    References
    1. Rogers, Reece. "SearchGPT Is OpenAI's Direct Assault on Google". [...]. Retrieved 2024-07-26.
    2. Wiggers, Kyle (2024-07-25). "With Google in its sights, OpenAI unveils SearchGPT". [...]. Retrieved 2024-07-26.
    3. [and 5.] Robins-Early, Nick (2024-07-25). "OpenAI tests new search engine called SearchGPT amid AI arms race". [...]. Retrieved 2024-07-28.
    4. Robison, Kylie (2024-07-25). "OpenAI announces SearchGPT, its AI-powered search engine". [...]. Retrieved 2024-07-26.
    5. [and 3.] Robins-Early, Nick (2024-07-25). "OpenAI tests new search engine called SearchGPT amid AI arms race". [...]. Retrieved 2024-07-28."

    Comment

    We quote a second report, which is about Alphabet (Google), OpenAI, You.com, Perplexity AI, and Microsoft Bing and was publicized on the 20th of January 2023: "[...]
    [...] several new companies, including You.com and Perplexity.ai, are already offering online search engines that let you ask questions through an online chatbot, much like ChatGPT. Microsoft is also working on a new version of its Bing search engine that would include similar technology, according to a report [...]."

    Comment

    We quote a third report, which is about OpenAI, ChatGPT, and SearchGPT, Alphabet Google, and Microsoft, and was publicized on the 25th of July 2024: "[...]
    OpenAI is testing an AI-powered search engine that can access information from across the internet in real time.
    [...] The company said it was testing the technology with a small group of users as well as online publishers who partnered with OpenAI to help build the search engine.
    "Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results," the company said in its blog post. "We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you're looking for can be faster and easier."
    [...]
    Other companies have built similar technologies, including [...] Google and Microsoft, as well as start-ups like [...] Perplexity [AI]. These services augment traditional internet search engines with chatbot technology that generates text as a way of answering questions and summarizing online information.
    OpenAI plans to integrate its new search engine technology with its existing online chatbot, ChatGPT [...]. The company said that its new technology would respond to questions with up-to-date information from the web while also providing links to relevant sources.
    [...]
    OpenAI said that online publishers [...] are partnering with the company on the product. "We are committed to a thriving ecosystem of publishers and creators," the company said, adding that the technology would highlight "high quality content in a conversational interface with multiple opportunities for users to engage."
    The company also said that it was developing ways for publishers to manage how they appear in answers generated by the new search engine.
    [...]"

    Comment

    We quote a fourth report, which is about OpenAI, ChatGPT, and SearchGPT, and Alphabet (Google), and was publicized on the 25th of July 2024: "OpenAI is taking on Google with a new artificial intelligence search engine
    The company is testing SearchGPT, which will combine its AI technology with real-time information from the web to allow people to search for information in the same way they talk to ChatGPT. While the search engine is currently in an early test for a limited number of users, OpenAI said it plans to integrate the tools into ChatGPT in the future.
    With the new old feature, OpenAI will be directly competing with Google, which has for years dominated the online search market but has scrambled to keep pace with the AI arms race that OpenAI kicked off when it launched ChatGPT in November 2022. SearchGPT could also pose a threat to Microsoft's Bing, the also-ran search engine player that last year incorporated OpenAI's own technology in an effort to better compete with Google.
    [...]
    With SearchGPT, users will be able to ask questions in natural language - the same way they talk with ChatGPT - and they'll receive answers that they can then follow up on with additional questions. But unlike ChatGPT, which is often reliant on older data to generate its answers, SearchGPT will provide up-to-date information, with online links to what the company says are "clear and relevant sources."
    [...]
    The tool will also show a sidebar with additional links to relevant information - not totally unlike the ten blue links users are used to seeing on Google Search results pages.
    "Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results," the company said in a blog post. "We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you're looking for can be faster and easier."
    The OpenAI search engine could cement generative AI - technology that can create original text, as well as other types of media - as the future of finding answers online, after Google and others have experimented with early efforts to incorporate chatbots and AI-generated answers into the search experience. But that future is not assured, given AI tools' propensity to confidently assert false information with no indication that it may be incorrect or misleading.
    OpenAI's new tool comes after Google in May rolled out new AI-generated summaries to top some search results pages so users don't have to click through multiple links to get quick answers to their questions. Google quickly pulled back on use of the feature after it provided false, and in some cases totally nonsensical, information, in response to some users' queries.
    The rollout of Google's tool also raised concerns among some news publishers, who worried that the AI summaries could cannibalize their web traffic by removing the need for users to visit their sites to get information - and similar concerns could arise with OpenAI's search engine.
    However, OpenAI said Thursday that it partnered with publishers to build the tool and give them options to "manage how they appear" in SearchGPT's results. It added that sites can appear in SearchGPT even if they've opted out of having their content be used to train the company's AI models."

    Comment
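The conversational search loop described in the report, where each follow-up question is answered in the context of the previous turns and with links to sources, can be sketched as follows. The class and method names are hypothetical stand-ins, not OpenAI's implementation.

```python
# Hedged sketch of a conversational search session: every follow-up is
# answered relative to the accumulated history, and each answer carries
# retrieved source links.

class SearchSession:
    def __init__(self):
        self.history = []  # list of (question, answer) turns

    def _retrieve(self, query):
        # Stand-in for a real-time web retrieval step.
        return [f"source for: {query}"]

    def ask(self, question):
        # A real system would condition the model on the full history;
        # here we only record it to show the shape of the loop.
        context = " | ".join(q for q, _ in self.history)
        sources = self._retrieve(question)
        answer = (f"[context: {context or 'none'}] "
                  f"answer to '{question}' with links: {sources}")
        self.history.append((question, answer))
        return answer

session = SearchSession()
session.ask("Who launched SearchGPT?")
print(session.ask("When?"))
```

The design choice worth noting is that the session, not the model, owns the turn history; this is what lets a bare follow-up like "When?" be resolved against the earlier question.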

    We quote and translate a fifth report, which is about Meta (Facebook) and Meta AI, and was publicized on the 1st of August 2024: "Zuckerberg raves or daydreams about the future of AI at Meta
    Meta has made a huge profit thanks to booming advertising business. The Facebook group wants to invest a large part of this in artificial intelligence. Mark Zuckerberg has high confidence in his AI chatbot.
    [...]
    Already today, users are using Meta AI to "play through difficult conversations before having them with a human", Zuckerberg claimed. Or to search for information.
    The latter example was not entirely well chosen: Only hours earlier, Meta admitted that its chatbot had declared the assassination attempt on former President Donald Trump to be fiction in a conversation with some users. The company blamed this on the well-known problem of so-called "hallucinations", in which AI software simply fantasizes.

    Ten times more computing power for next AI model
    And AI visions cost a lot of money. The expenses of Meta rose by seven percent to 24.22 billion dollars in the last quarter. For this year, Meta now expects costs of between 37 and 40 billion dollars - and is preparing investors for the fact that they will grow "significantly" in 2025. Especially computing power for training AI models is expensive.
    Zuckerberg estimated that the next in-house AI model, called Llama-4, will require around ten times more computing power for training than the current version. Meta is already a major customer of the chip company Nvidia, whose systems dominate the training of AI [...].
    The demand for computing power is difficult to predict, Zuckerberg admitted. But he wants to be on the safe side: "At this point, I would rather risk building capacity before it is needed than be too late." [...]

    3.2 billion meta users daily
    CFO Susan Li also emphasized that the AI infrastructure could be used for various purposes - from training artificial intelligence to better personalizing the video selection for individual users. This could help in competition with the video platform TikTok, for example.
    [...]"

    Comment
    We recall that Meta (Facebook) also has a Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), just like for example Alphabet (Google) with its online services. It also has an eXtended Mixed Reality (XMR) Environment (XMRE), or simply eXtended Reality (XR) Environment (XRE) (guess who edited XMR to XR without presenting a new expression of idea), which is a partial plagiarism and fake of our Ontoverse (Ov) with Collaborative Virtual Reality Environment (CVE), and so on.

    Obviously, Meta (Facebook) is only mimicking C.S. and our corporation. The amount of computing power was taken from our recent publications in relation to SoftBionics, but also resilience, which is also the source of the flexibility (see the note SOPR uses DCs for validation and verification of the 23rd of July 2024).
    But if done by Meta alone, then that does not work either for various legal and technological reasons, as also discussed in this clarification.

    We quote and translate a sixth report, which is about Meta (Facebook) and Meta AI, and OpenAI, ChatGPT, and SearchGPT, Alphabet (Google), and Apple, and was publicized on the 3rd of August 2024: "[...]
    [...] Meta has plowed billions into weaving the technology into its social networking apps and advertising business, including by creating artificially intelligent characters that could chat through text across its messaging apps.
    [...] Zuckerberg said he would rather build too fast "rather than too late" to prevent his competitors from gaining an edge in the A.I. race.
    One area of A.I. that is rapidly emerging is chatbots with voice abilities, which act as virtual assistants. In May, OpenAI [...] unveiled a version of its ChatGPT chatbot that could receive and respond to voice commands, images and videos. It was part of a wider effort to combine conversational chatbots with voice assistants like the Google Assistant and Apple's Siri.
    [...]"

    Comment
    Obviously, some entities are decades too late.

    We note

  • difference between conversational chatbot and virtual assistant.

    Google Assistant and Apple's Siri are already based on our OntoBot and used on their variants of our Ontoscope (Os).

    Before we begin with our concluding comment, we note at first that in March 2011 a not so competent author messed up at least the webpages about the fields of chatbot, virtual assistant, dialog system, conversational system, expert system, intelligent agent, question answering, and avatar. This author suggested to merge the contents of the webpages about chatbot, virtual assistant, and dialog system, called a dialog system a conversational agent, called an interactive online character, interactive character, or automated character an automated online assistant, and designated dialog system, avatar, expert system, and servers and other maintaining systems as components of the automated online assistant, which reflects our OntoBot.
    Other authors finally called an automated online assistant a virtual assistant and software agent, referenced the field of Intelligent Personal Assistant (IPA) that organizes (see also CALO), and removed the components nonsense, but in April 2017 also wrote that the term chatbot is often used interchangeably.
    Other authors created the terms generalized chatbot, Artificial Intelligence based chatbot or AI chatbot, and automatic chatbot using AI, and also conversational AI as alternative but illegal designations for our OntoBot to deliberately confuse the members of the addressed and interested public about the true origin of our works of art.

    We also note that in older versions of the description of the chatbot Albert One only the words natural language within the term natural language programming were marked as a hyperlink, but in newer versions the whole term is marked as a hyperlink. The reason is that the subject natural language programming is described

  • only since the 9th of August 2008 (first version) in general and
  • as "an ontology-assisted way of programming in terms of natural-language sentences" in particular, specifically in relation to the field of reliable and trustworthy Robotic System (RS),

    and therefore seems to be our Ontologic Programming (OP), because the other fields are called

  • Natural Language Processing (NLP or NatLP),
  • Natural Language Programming (NLP) (see Fuzzy Logic (FL)==Computing with Words (CwW) (FL==CwW)),
  • Literate Programming (LP or LitP), and
  • Neuro-Linguistic Programming (NLP or NeuroLP).

    Our claim is also supported by the fact that the section AI in Natural Language Programming has been added only on the 31st of March 2024 and includes matter based on our OntoBot.

    Therefore, we can only take a large portion of the online encyclopedia contents quoted above as incorrect opinions or just nonsense and rubbish.
    Howsoever, we note that no legal loophole does exist.

    The fields of

  • chatbot,
  • virtual assistant,
  • Dialog System (DS or DiaS),
  • Conversational System (CS or ConS), including Conversational Agent System (CAS or ConAS),
  • Intelligent Agent System (IAS),
  • Cognitive Agent System (CAS or CogAS), and
  • this and that

    are not the same, but have certain relations and can be the basic parts of integrations.
    Our OS with its integrating OS Architecture (OSA), and OntoBot, OntoScope, and other OS Components (OSC) has and is all of this and that, and much more.

    A chatbot is just a computer program for the simulation of human conversation, which is based on Computational Linguistics (CL) and Natural Language Processing (NLP).
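A chatbot in this minimal, classic sense can be sketched in a few lines, in the style of ELIZA (1966): simple pattern matching over the user's utterance with canned response templates, and no reasoning or understanding at all. The rules below are illustrative, not a reconstruction of any particular system.

```python
import re

# ELIZA-style chatbot sketch: ordered pattern/template rules; the first
# matching rule produces the response, with a generic fallback otherwise.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father)\b", re.I), "Tell me more about your family."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Fill captured fragments of the user's input into the template.
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I need a vacation"))  # -> Why do you need a vacation?
print(respond("Hello there"))        # -> Please, go on.
```

The sketch makes the point of the definition concrete: such a program simulates conversation purely at the surface level of language, which is precisely what separates a chatbot from the dialog, conversational, and cognitive agent systems discussed below.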

    DISCERN does the parts related to NLP and memory with a subsymbolic, unified connectionist model or ANNM, and is related to IRS, IS, and CogS, but not to DiaS and ConS.

    If an LLM hallucinates, then it has no reasoning and understanding, or is crazy or mad.
    See also the note These 'R' Us of the 11th of October 2023.

    ChatGPT, Gemini, and Co. are based on integrated connectionist models or ANNMs (e.g. ANNLMs (e.g. LLMs)), which belong to the fields of Cognitive System (CS or CogS), and therefore constitute Cognitive Agent System (CAS or CogAS) variants, and Ontologics and therefore constitute OntoBot variants.

    We also already mentioned the time gaps in relation to the

  • technological progress in the fields of chatbot and virtual assistant, and related fields, and
  • creations, presentations, and discussions of our Evoos and our OS,

    specifically between

  • ELIZA (1966),
  • PARRY (1972),
  • SHRDLU (1968-1970),
  • Jabberwacky (1988), and
  • Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) (also called Alicebot, or simply Alice) (1995),

    and Evoos (1999)

  • University of Rochester The Rochester Interactive Planning System (TRIPS) (2001),
  • SmarterChild (2001), and
  • Defense Advanced Research Projects Agency (DARPA) Personalized Assistant that Learns (PAL) program (2003 - 2008), including
    • Carnegie Mellon University: Reflective Agents with Distributed Adaptive Reasoning (RADAR) and
    • SRI International: Cognitive Assistant that Learns and Organizes (CALO), resulted in more than 500 publications only to arrive at our coherent OM and OntoBot,

    which are based on our Evoos and are referenced by our OS,

    and OS (2006)

  • Siri app (2010) and Apple Siri iOS (2011), which is a spin-off of CALO and based on Nuance Communications (speech recognition system), and also our Evoos (see list point before) and our OS (multimodal with gesture recognition on touchscreen in addition to speech recognition and Text User Interfaces (TUI), Graphical User Interfaces (GUIs), and Voice User Interface (VUI)), and also
  • Google Now (2012), which is based on our Evoos and our OS,
  • Samsung S Voice (2012), which is based on our Evoos and our OS,
  • Amazon Alexa (2014), which is based on Ivona (multi-lingual speech synthesizer system (2005), later also described as voice recognition, speech recognition system), Yap (speech recognition system, voice-to-text translation services, and Voice User Interface (VUI) for Web Services (WS) (2006 or 2007), later described as "Amazon Has Acquired Yap, the Closest Thing to a Siri Clone It Can Find." (2011), and also "Yap Speech Cloud was a multi-lingual speech recognition system" and "Yap Speech Cloud was a multimodal speech recognition system" (2015)), Evi (formerly True Knowledge (2012)) (Question Answering (QA) System (QAS) and Semantic Search Engine (SSE) (2007) and partial plagiarism of our OntoBot), which are based on our Evoos and our OS,
  • Microsoft Cortana (2014), which is based on our Evoos and our OS,
  • Google Assistant (2016), which is based on our Evoos and our OS, and
  • Samsung Bixby (2017), which is based on our Evoos and our OS,

    and again Evoos and OS

  • Google Meena (2020), a chatbot based on an Artificial Neural Network (ANN) and developed into the Language Model for Dialogue Applications (LaMDA), which seems to be based on our Evoos and our OS,
  • ChatGPT (2020), which is based on our Evoos and our OS,
  • Google Language Model for Dialogue Applications (LaMDA) (2021), which is a dual process chatbot based on the integration of Conversational Agent System (CAS or ConAS) and Cognitive Agent System (CAS or CogAS), and a family of conversational large language models, which is based on our Evoos and our OS,
  • Google Pathways Language Model (PaLM) (2022), which is based on our Evoos and our OS,
  • Microsoft Copilot (Bing Chat) (2023), which is based on ChatGPT,
  • Google Gemini (Bard) (2023), collaborative AI service, which is based on our Evoos and our OS,
  • and all the other plagiarisms and fakes of our OntoBot, which followed our creations, presentations, and discussions, and also explanations of our Evoos and our OS.

    The attempts to manipulate and redefine the subject matters coincide with the time gaps of the history.
    The development is synchronized with our creations, presentations, and discussions, and also additional publications, explanations, clarifications, investigations, and so on, and the result shows that our works of art have been taken as sources of inspiration and blueprints.
    And we all do know the old and new entities, that are responsible for this mess.

    Prior art includes

  • Information Retrieval (IR) System (IRS),
  • Knowledge Representation and Reasoning (KRR),
  • unified approach, or connectionist symbol processing system, or subsymbolic Artificial Intelligence (AI) system, or parallel distributed processing system, or distributed connectionist model, or Distributed Artificial Neural Network (DANN) model, or integrated connectionist model,
    • "Symbol Processing Systems, Connectionist Networks, and Generalized Connectionist Networks" (1990),
    • "[KBANN:] Refinement of approximate domain theories by knowledge-based neural networks" (1990),
    • "Natural Language Processing With Modular Neural Networks and Distributed Lexicon" (1991), which describes the integration of the DIStributed PARaphraser (DISPAIR) system (1989) and Forming Global Representations with Extended backPropagation (FGREP) (1987 - 1989),
    • "Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory" (1993),
    • "Integrated Connectionist Models: Building AI Systems on Subsymbolic Foundations" (1994), which describes the Distributed Artificial Neural Network (DANN) model DIstributed SCript processing and Episodic memorRy Network (DISCERN) (1990),
    • "Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems" (1996), specifically chapter 21.4 Fuzzy Neural Networks For Speech Recognition, and
    • "Natural Language Processing with Subsymbolic Neural Networks" (1997),
    • Evoos (1999), including coherent OM (e.g. Foundational Model (FM), Capability and Operational Model (COM), AIM, MLM, ANNM, ANNLM, LLM, etc.), Multimodal User Interface (MUI or MMUI), etc.,
  • hybrid approach, or Hybrid Symbolic-Connectionist (HSC) or Hybrid Connectionist-Symbolic (HCS) system (not to be confused with purely connectionist hybrids of the unified approach),
    • "MIX: Modular Integration of Connectionist and Symbolic Processing in Knowledge-Based Systems" (Proposal 1993) (1994),
    • "Machine Learning at the Crossroads of Symbolic and Connectionist Research" (1995),
    • "Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems" (1996), and
    • Evoos (1999), including coherent OM (e.g. Foundational Model (FM), Capability and Operational Model (COM), AIM, MLM, ANNM, ANNLM, LLM, etc.),
  • Computational Linguistics (CL), including
    • Speech Processing (SP), including
      • speech recognition system and
      • speech synthesis system,
  • Natural Language Processing (NLP), including Natural Language Parsing (NLP) and Natural Language Generation (NLG), and Natural Language Understanding (NLU), including
    • modular Connectionist Network (CN) or Artificial Neural Network (ANN), and
    • integrated connectionist model or ANNM (e.g. ANNLM), including
      • DIstributed SCript processing and Episodic memorRy Network (DISCERN),
  • Conversational User Interface (CUI),
  • Voice User Interface (VUI), voice assistant,
  • chatbot,
  • Dialog System (DS or DiaS), comprising Dialogue Management System (DMS),
  • Conversational System (CS or ConS), including
    • Conversational Agent System (CAS or ConAS),
    • "A Conversational Agent" (1996) with NLU, reasoning, planning, goal inferencing, etc.,
    • Evoos (1999),
    • "Towards Conversational Human-Computer Interaction" (2001) about TRIPS,
    • PAL, RADAR and CALO (2003 - 2008), and
    • "Conversational Agents" in The Practical Handbook of Internet Computing (2004),

    In the Clarification of the 8th of May 2022 we quoted the document titled "The Arrow System" of the TUNES OS, both of which are referenced on the webpage Links to Software of the website of OntoLinux: "[...]
    Introduction
    The Arrow System is a way for computing devices to model and manipulate collections of information gained from the world, up to and including the higher-order reasoning structures of which humans are capable. [...]
    [...] To place this proposal within the space of software types, one should accurately identify it as offering more than an information database, but less than full artificial intelligence.
    [...]

    Model-Level Reflection
    [...]
    The definition of model-level reflection relates to the concept of the knowledge-level in current artificial intelligence research. [...]

    Knowledge Systems
    [...]
    In this way, a knowledge system that functions non-monotonically in complete isolation from human attention constitutes artificial intelligence. [...]

    The Proposal
    This paper intends to present a cybernetic system that fulfills all the stated requirements for an information system that provides system-wide model-level reflection, as well as the properties necessary to manage a complete knowledge system. As such, it should provide an excellent substrate for the development of artificial intelligence prototype systems, including expert systems and knowledge bases with a far greater utility than conventional systems. [...]

    Dynamics
    [...] The Arrow system is a general-purpose information system that expresses contexts, ontologies, and language models as first-order concepts.[...]
    [...]

    User Interface
    [...]
    The system models the ontological frame that the user employs, and interacts with the user based on that group of ontologies. In model-level terms, the system speaks directly to the ontology, and merely passes the conversation along to the user for review. The user, in turn, acts for the ontology in communicating with the system.
    [...]"

    Comment
    We note once again

  • cybernetics,
  • ontology, ontological frame,
  • statement-based data store, including triple store, tuple store, and associative database,
  • graph store,
  • GBKB or KG, expressing contexts, ontologies, and language models,
  • symbolic processing,
  • Information System (IS), including Knowledge Management System (KMS), and
  • no ML, no ANN, etc. and therefore no integrated subsymbolic and symbolic system, no ANNLM, etc..

    One can also see that LM was related to programming languages and not to CL, NLP and NLU, DiaS and ConS, chatbot, ABS and MAS, CogS, and so on, because

  • on the one hand the author discussed explicitly IS, including KMS, ES, etc., and excluded explicitly other fields of AI, and
  • on the other hand he would have mentioned these other fields at least.

    The reason for this is quite simple: the author did not know exactly what we had already discussed and publicized. Therefore, a counterpart or complementary proposal to our The Proposal was created as close and as far reaching as possible.
    The integrated subsymbolic and symbolic systems, and much more were added to the Arrow System with our Evoos and all the rest with our OS.
    And only after we publicized The Proposal, the discussion on the related mailing list also turned to the fields of Machine Learning (ML), Computational Intelligence (CI), Soft Computing (SC), etc. and their integrations with symbolic processing respectively logic-based AI. But then it was too late, and the discussion shows that they were shocked.

    In the same Clarification of the 8th of May 2022 we also recalled the following: "[...]
    In the OntoLix and OntoLinux Further steps of the 4th of October 2017 we also recalled the following: "[...]

  • We also said that as one resulting functionality based on the features of the OS a user can "Speak with the Web" and "Talk to the Web" directly (see the Comment of the Day of the 5th of May 2016 and Ontologic Web Further steps of the 9th of December 2016).
  • We also said that conceptually domain names are not needed anymore (see the Ontologic Net Further steps of the 5th of July 2017 and the Ontologic Web Further steps of the 6th of July 2017).""

    From the point of view of physics, chemistry, and biology, as well as cybernetics, bionics, and ontonics, we concluded that there is a convergence of nature and technology, a convergence of man and machine, and a convergence of mind and matter, expressed by cybernetic reflection, cybernetic self-portrait, cybernetic augmentation, cybernetic extension, and cybernetic self-extension, and also digital twin, Ontologic holon (Onton), etc., and from the point of view of Algorithmic Information Theory (AIT) we concluded that the complexity of all the many explanations, learning materials, basic knowledge, programming languages, system, application, and service manuals, source codes, and all the other blah blah blah is so high that it reached a certain threshold, which is higher than doing it directly in natural multimodalities.
    As we explained in the past (begin for example with the Clarification of the 8th of May 2022), we have taken the thought, which is given in the introduction of the document titled "A Conversational Agent" and quoted above, and other thoughts, which are given in many other works, further with our OS by

  • adding a Zero Ontology or Null Ontology,
  • making natural language and conversation the basis of processing and the computing substrate,
  • adding all other modalities (e.g. Natural Multimodal Processing (NMP) (Natural Multimodal Scanning (NMS) and Natural Multimodal Generation (NMG)) and Natural Multimodal Understanding (NMU)),
  • formalizing the matter on the basis of ontology,
  • removing the limited domain and making it universal,
  • eliminating the hallucination, false positives, and other deficits,
  • integrating with
    • Intelligent Agent System (IAS) (e.g. BDI), Multi-Agent System (MAS), Holonic Agent System (HAS), etc.,
    • eXtended Mixed Reality (XMR),
    • Intelligent Virtual Environment (IVE),
    • Intelligent Cyber-Physical Environment (ICPE),
    • and other technologies, which need knowledge and beliefs about world, others, and self,
  • distributing and clustering everything,
  • automatizing everything, and
  • creating much more with, for example, the fields listed above, including Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), and our OM, OntoBot, and the rest of our OS with its OS Architecture (OSA) and OS Components (OSC).

    In the message OntoLinux Further steps of the 20th of September 2012 and the Investigations::Multimedia of the 26th of October 2014, we referenced and discussed the probabilistic information retrieval and full-text search engine library Xapian in relation to the Ontologic File System (OntoFS), which is integrated with our OntoBot by the OS Architecture (OSA), and explained our Ontologics, Ontologic Computing (OC), Ontologic Programming (OP), and so on, which make Natural Modalities (e.g. Natural Language) query languages, programming languages, operating languages, and so on.
    We also recall the integration of several Linux distributions with tens of thousands of software items.
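
    For illustration only, the kind of probabilistic ranked retrieval that a full-text search library such as Xapian provides can be sketched in plain Python with a BM25-style scorer; the class and parameter names below are hypothetical and the sketch is not Xapian's actual API:

```python
import math
from collections import Counter

class TinyIndex:
    """Minimal probabilistic full-text index with BM25-style ranking,
    illustrating ranked retrieval as provided by libraries like Xapian."""

    def __init__(self, k1=1.5, b=0.75):
        self.docs = []          # tokenized documents
        self.df = Counter()     # document frequency per term
        self.k1, self.b = k1, b

    def add(self, text):
        """Index one document (naive whitespace tokenization)."""
        tokens = text.lower().split()
        self.docs.append(tokens)
        for term in set(tokens):
            self.df[term] += 1

    def search(self, query, top=3):
        """Return indices of the best-matching documents, ranked by score."""
        n = len(self.docs)
        avgdl = sum(len(d) for d in self.docs) / n
        scores = []
        for i, doc in enumerate(self.docs):
            tf = Counter(doc)
            score = 0.0
            for term in query.lower().split():
                if self.df[term] == 0:
                    continue  # term never indexed
                idf = math.log(1 + (n - self.df[term] + 0.5) / (self.df[term] + 0.5))
                norm = tf[term] + self.k1 * (1 - self.b + self.b * len(doc) / avgdl)
                score += idf * tf[term] * (self.k1 + 1) / norm
            scores.append((score, i))
        return [i for score, i in sorted(scores, reverse=True)[:top] if score > 0]
```

    A real engine adds stemming, positional data, and an on-disk index; the scoring idea, however, is the same.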

    "approach, emphasising its ability to fine-tune various top-performing AI models rather than relying on a single one" and "has access to [OpenAI] GPT-4, [Anthropic] Claude 3.5, Mistral Large, [Meta (Facebook)] Llama 3[, Alphabet (Google) Gemini]".
    Here we have some kind of multimodel metasearch approach in addition to Foundational Model (FM) (e.g. Foundation Model (FM)). The meta level already shows a common upper metalevel, which is part of the coherent Ontologic Model (OM), and is provided with the common OntoSearch and OntoFind subsystem and platform of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies in accordance with the ToS of our SOPR.
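
    A multimodel metasearch layer of the quoted kind can be sketched as a router that sends one prompt to several underlying models and ranks the answers at a common metalevel; the model callables and the confidence-based ranking below are illustrative assumptions, not the actual product architecture:

```python
from typing import Callable, Dict, List, Tuple

# A "model" is any callable that maps a prompt to (answer, confidence).
# In practice these would wrap APIs such as GPT-4, Claude, Llama, or Gemini.
Model = Callable[[str], Tuple[str, float]]

def metasearch(prompt: str, models: Dict[str, Model]) -> List[Tuple[str, str, float]]:
    """Dispatch one prompt to several models and rank the answers on a
    common upper metalevel, here simply by self-reported confidence."""
    results = [(name, *model(prompt)) for name, model in models.items()]
    return sorted(results, key=lambda r: r[2], reverse=True)
```

    The ranking criterion is the interesting design point: it could equally be cost, latency, or a learned per-domain routing policy.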

    fundamental principles in relation to modality, architecture, or implementation, and due to evolutionary, embryonic, epistemologic, cognitive, and reflective architecture also "function (i.e., amenability to subsequent further development), as well as has potential applications in many domains and can be applied across a wide range of use cases beyond text across a range of modalities and beyond expert systems and data-driven machine learning approaches"

    Our coherent Ontologic Model (OM), including

  • Foundational Model (FM), including
    • Foundation Model (FM), including
      • Pre-trained Language Model (PLM or PTLM),
      • Natural Language Processing Foundation Model (NLPFM),
  • Capability and Operational Model (COM),
  • AIM,
  • MLM,
  • ANNLM, including
    • LLM,
  • MultiModal LM,
  • Program-aided Language Model (PLM or PALM), and
  • Cognitive Model (CM or CogM).

    We also have the aspect of embryology. Like a stem cell in multicellular organisms, which is an undifferentiated or partially differentiated cell that can change into various types of cells and proliferate indefinitely to produce more of the same stem cell, the foundational model (e.g. foundation model) in a multilingual, multiparadigmatic, multimodal multimedia system is a stem model.
    Correspondingly, a foundational model can be adapted and fine-tuned, instruct-tuned, or trained into special models by reflection (e.g. model-reflection), Holonic Agent System (HAS), Cognitive Agent System (CAS or CogAS), and epistemology (e.g. learning). This shows that an FM is some kind of general model or upper model, and also shows somehow its relation to epistemology, because a PTLM learns new data, which both mean Ontologic Model (OM).
    In addition, our transformative, generative, and creative Bionics ... as one general way of application.
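
    The stem-model analogy can be sketched as a toy example: a shared, frozen part (the "stem") is reused, while a small task-specific head is adapted per use case. Everything below (class name, linear features, gradient step) is an illustrative assumption, not any actual foundation-model implementation:

```python
import random

class StemModel:
    """Illustrative 'stem model': shared (frozen) features plus
    task-specific heads that are adapted per use case."""

    def __init__(self, dim=4, seed=0):
        rng = random.Random(seed)
        # Shared parameters, frozen after "pre-training".
        self.shared = [rng.uniform(-1, 1) for _ in range(dim)]
        self.heads = {}  # task name -> trainable weight vector

    def specialize(self, task, data, lr=0.1, epochs=300):
        """Fine-tune a fresh head for one task; the shared part stays fixed."""
        w = [0.0] * len(self.shared)
        for _ in range(epochs):
            for x, y in data:
                feats = [xi * si for xi, si in zip(x, self.shared)]
                pred = sum(wi * fi for wi, fi in zip(w, feats))
                err = pred - y
                # Gradient step on the head only.
                w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
        self.heads[task] = w

    def predict(self, task, x):
        feats = [xi * si for xi, si in zip(x, self.shared)]
        return sum(wi * fi for wi, fi in zip(self.heads[task], feats))
```

    One stem, many heads: each call to `specialize` produces a new differentiated model while the shared substrate is reused unchanged, which is the point of the analogy.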

    Whether our Evoos or our OS has been taken as source of inspiration and blueprint for what is called foundation model is only one relevant detail.
    Our Evoos and our OS are more general on the one end and more special on the other end and also describe the whole spectrum between both ends, including all the conversational technologies, goods (e.g. applications), and services.

    We quote an online encyclopedia about the subject foundation model: "A foundation model, also known as large AI model, is a machine learning or deep learning model that is trained on broad data such that it can be applied across a wide range of use cases.[1] Foundation models have transformed artificial intelligence (AI), powering prominent generative AI applications [...]. [...]
    Foundation models are general-purpose technologies that can support a diverse range of use cases. [...]

    [...]

    History
    Technologically, foundation models are built using established machine learning techniques like deep neural networks, transfer learning, and self-supervised learning. Foundation models are noteworthy given the unprecedented resource investment, model and data size, and ultimately their scope of application when compared to previous forms of AI. The rise of foundation models constitutes a broad shift in AI development with their scope of application, when compared to previous forms of AI and constitutes a new paradigm in AI, where general-purpose models function as a reusable infrastructure, instead of bespoke and one-off task-specific models.
    [...]"

    Comment

    What is wrongly called New RenAIssance, New AI, including what is wrongly called conversational AI, conversational chatbot, and conversational search engine, and so on, began with the creations of our Evoos in 1999 and our OS in 2006, and is an essential part of what is called Ontologics.
    For this and many other reasons, our Evoos and our OS constitute sui generis works of art, for which C.S. holds all exclusive moral rights and copyrights, and our Society for Ontological Performance and Reproduction (SOPR) demands the payment of damage compensations, and regulates the allowance and licensing for the performance and reproduction of certain parts of them, and the utilization of the infrastructures of our SOPR and other Societies.

    For sure, many integrations of 2 fields exist, but nobody should count on legal security. More important and relevant is the fact that more fields have already been integrated by us.
    Some integrations are prior art or legal reproductions of works of prior art, like for example

  • GBKB or KG with Information System (IS), including Knowledge Management System (KMS),
  • GBKB or KG with Expert System (ES),

  • integrated connectionist model (e.g. ANNM) for NLP,
  • {?} integrated connectionist model as QAS, DISCERN,
  • integrated connectionist model as Information System (IS), including Knowledge Management System (KMS),
  • integrated connectionist model as Expert System (ES),
  • ...

    In the absence of prior art works, the legality is actually undecided:

  • ABS with GBKB or KG is ... - Evoos with KG, potentially semantic agent, etc.
  • GBKB or KG with DiaS is ... - Evoos with KG and MMUI
  • RecS, SE, QAS with integrated ANNM is ...

  • NLP with GBKB or KG - Evoos, potentially AI
  • NLU with GBKB or KG - CogAS, Evoos, potentially semantic agent, etc.
  • DiaS with GBKB or KG - Evoos with KG and MMUI, potentially semantic agent, etc.
  • ConS with GBKB or KG - Evoos with KG and MMUI, potentially semantic agent, TRIPS, etc.
  • chatbot with GBKB or KG - Evoos with KG and MMUI, potentially semantic agent
  • virtual assistant with GBKB or KG - Evoos with KG and MMUI, potentially semantic agent, SmartKom, etc.
  • LM with GBKB or KG - Arrow System, but no AI, ML, CI, ANN, CL, NLP and NLU, ABS, coherent OM, FM (e.g. FM), ANNLM, LLM, etc. because of Evoos
  • integrated connectionist model for NLU -

    But many other integrations are illegal plagiarisms and fakes of our original and unique works of art, such as for example

  • chatbot based on OM (e.g. FM (e.g. FM), COM, and ANNLM (e.g. LLM)) is always a partial OntoBot,
  • chatbot based on integrated connectionist model or integrated ANNM (e.g. ANNLM (e.g. LLM)),
  • DiaM and ConM based on integrated connectionist model or integrated ANNM (e.g. ANNLM (e.g. LLM)) and OM are always partial OntoBots,

  • for example, ChatGPT is already an illegal chatbot, virtual assistant, and Information Retrieval (IR) System (IRS) based on an illegal OM,
  • for example, SearchGPT is already an illegal search engine based on an illegal OM,

  • FM for CL is already illegal
  • FM for NLP (NLParsing and NLGeneration) and NLU is already illegal
  • FM for Natural Language Search Engine (NLSE) is already illegal
  • Generative AI, {?!}integrated{?!}ANNM, {!?}ANNLM, and LLM for NLU is already illegal
  • integrated ANNM with GBKB or KG is already illegal; a KG was not needed, because the Knowledge Base (KB) is implemented as ANN
    See also the note KG with ANNLM is copyright infringement of the 21st of February 2024.
  • video gaming with FM, ANNM, ANNLM, LLM is already illegal - OntoBot, OntoScope
  • VR, MR, XMR, or XR with FM, ANNM, ANNLM, LLM is already illegal - OntoBot, OntoScope, OntoCoVE, OntoNet, OntoWeb, OntoVerse, OntoApps and OntoServices
  • CAx, including Low-Code and No-Code development systems, with FM, ANNM, ANNLM, LLM is already illegal - OntoBot, OntoCAx, and OntoBlender
  • Semantic (World Wide) Web (SWWW) with FM, ANNM, ANNLM, LLM is already illegal
  • Linked Data (LD) with FM, ANNM, ANNLM, LLM is already illegal
  • Conversational AI as a Service is already illegal - OntoBot, OntoNet, OntoWeb, OntoVerse, OntoApps and OntoServices, SoftBionics as a Service (SBaaS)
  • Robotic Process Automation (RPA) with FM, ANNM, ANNLM, LLM is already illegal, Intelligent Automation (IA) (SoftBionics (SB) and Robotic Process Automation (RPA)) already highly problematic alone

    An old chatbot has parts of NLP, but not parts of NLU, NLG, and DMS.
    According to an online encyclopedia, a new chatbot has an integrated ANNLM, which also includes a DMS, and therefore is a ConAS.
    A ConAS can have a Task Management System (TMS), like a virtual assistant.
    A virtual assistant can have a CUI, VUI, NLUI, chatbot, and ConAS.
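
    The component split described above (NLU maps input to an intent, the DMS tracks state and selects an action, NLG renders the reply) can be sketched as follows; the intent rules and reply templates are purely illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConversationalAgent:
    """Minimal sketch of a ConAS pipeline: NLU -> DMS -> NLG."""
    history: List[str] = field(default_factory=list)

    def nlu(self, text: str) -> str:
        """Natural Language Understanding: map raw text to an intent."""
        t = text.lower()
        if "?" in t or t.startswith(("what", "who", "how")):
            return "question"
        if any(word in t for word in ("hello", "hi")):
            return "greeting"
        return "statement"

    def dms(self, intent: str) -> str:
        """Dialogue Management System: track state, pick an action."""
        self.history.append(intent)  # dialogue state tracking
        return {"greeting": "greet_back",
                "question": "answer"}.get(intent, "acknowledge")

    def nlg(self, action: str) -> str:
        """Natural Language Generation: render the chosen action as text."""
        return {"greet_back": "Hello! How can I help?",
                "answer": "Let me look that up.",
                "acknowledge": "I see."}[action]

    def respond(self, text: str) -> str:
        return self.nlg(self.dms(self.nlu(text)))
```

    An old-style chatbot stops at pattern matching; the DMS stage with its state tracking is exactly the piece whose presence turns a chatbot into a ConAS in the terminology above.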

    The Distributed Artificial Neural Network (DANN) model DIstributed SCript processing and Episodic memorRy Network (DISCERN) is related to LM, NLP, and CogS and has episodic memory, but no KG, DiaS, ConS, MUI, etc..

    Somehow, we have the impression that someone wanted to redefine the scope of the field of Conversational System (CS or ConS) beyond the field of Dialog System (DS). Similar observations have been made with chatbot, Natural Language Generation (NLG), virtual assistant, and so on. Our fans and readers should be able to guess why after reading our related publications, including this clarification.
    But neither a VUI nor a chatbot understands the meaning of an interaction with a user (see also the quotes about the subjects chatbot and Dialog System (DS) below).
    Furthermore, a voice assistant is a virtual assistant with VUI.

    The variants

  • Search Engine (SE) with a conversational chatbot as User Interface (UI) and even Natural Language User Interface (NLUI) ("chatbot-like interface, allowing users to ask questions in natural language", e.g. Perplexity AI),
  • chatbot as Conversational User Interface (CUI) to a Search Engine (SE), and
  • conversational Search Engine (SE) with Natural Language User Interface (NLUI)

    are not similar, but more or less the same, and any difference does not matter anyway due to the

  • OS Architecture (OSA), which integrates (unifies and hybridizes) all individual functionalities of Information Retrieval (IR) System (IRS) (e.g. Recommendation System or Recommender System (RecS), and also Search System (SS) or Search Engine (SE), and Question Answering (QA) System (QAS)), Knowledge-Based System (KBS), Knowledge Representation and Reasoning (KRR), Knowledge Retrieval (KR) System (KRS) (e.g. Semantic Search Engine (SSE)), Information System (IS) (e.g. Knowledge Management System (KMS)), and SoftWare Agent-Based System (SWABS) (software robot), all in one, and
  • basic properties of (mostly) being reflective, holonic, and fractal,

    which simply said means all OntoBot, OntoSearch and OntoFind, and integrations are included in addition to batteries and original and unique works of art.

    So we have an essential part of our reflective, holonic OntoBot based on our OS Architecture (OSA) integrating all in one in a liquid way.
    Add a Search Engine (SE) and we have an essential part of our OntoSearch and OntoFind.
    Add a social media platform and we have an essential part of our OntoSocial.
    We also have Ontologic Applications and Ontologic Services (OAOS) based on the fields of IRS, KRR, and KR, and also on our coherent OM, OntoBot, and so on: "We have found out that OntoLinux can give a user undreamed-of possibilities in a number of different ways by:

  • allowing agents to be more proactive [or predictive] by letting them infer the goals of the user and ways to help the user achieve those goals,
  • simplifying the interface to machines by giving the user helpful recommendations,
  • simplifying the interface to complex applications by allowing the user to interact with them through common language, gesture, and sense,
  • helping the user to retrieve related information, knowledge, and processes, and
  • improving context sensing by making use of multimodal user interfaces and other sensors."


    In the cases of Perplexity, SearchGPT, etc. we have chatbot, Conversational System (CS or ConS), and Information Retrieval (IR) System (IRS), specifically Search System (SS) or Search Engine (SE), and Question Answering System (QAS).
    In the case of Meta AI we also have a Social and Societal System (S³).
    In the case of Amazon Alexa we also have a Semantic Search Engine (SSE).

    Obviously, we have causal links with our OntoSearch and OntoFind and also our OntoSocial, which are based on or integrated with our

  • coherent OM,
  • transformative, generative, and creative Bionics, and
  • OntoBot,

    which were used as source of inspiration and blueprint.
    Indeed, the legal scope depends on prior art (e.g. integrated connectionist model, Semantic (World Wide) Web (SWWW), etc.), but

  • on the one hand we have (more and more) problems to find decisive works of prior art, and
  • on the other hand we always find (more and more) artistical, technological, historical, and legal evidences, which support our claims.

    The matter has already been discussed to some extent in the Clarification of the 29th of May 2024.
    But interestingly, entities keep certain fields separated, which does not improve their legal position, because we have shown that the separated fields are already infringements of the rights and properties of C.S. and our corporation.

    Once again, the lack of prior art works emphasizes our explanations, clarifications, and claims.
    This has consequences for Natural Language (NL) Search Engine (SE) (NLSE) in general, and Perplexity AI, Microsoft Bing and Copilot, Apple Siri and NLSE, Google Intelligence, Samsung Intelligence, Apple Intelligence, Meta Intelligence, Atlassian Intelligence, Harmony Intelligence, etc., and so on in particular.
    We also already wondered why Google is hesitating with Google Search and Gemini, though together with Samsung we see multimodal (visual and textual) search, and Amazon is hesitating with Amazon Search and Alexa, though it added Rufus and Q, and invested in Anthropic, {correct?} Mistral, and Perplexity AI.

    So let us calculate and connect the dots to get the line, which separates legal from illegal.

    See also the note OpenAI still in LaLaLand of the 16th of March 2023 (keywords exclusive and ToS).

    Like all the crypto crap start-ups, all these AI crap start-ups have no viable business model and are not sustainable due to the cost for computing power and so on. But the same also holds for already established companies. Their infringements of the rights and properties of C.S. and our corporation become even more obvious in this context, because all sources of income are part of our business models, which shows once again that they interfere with, and also obstruct, undermine, and harm the exclusive moral rights respectively Lanham (Trademark) rights (e.g. exploitation (e.g. commercialization (e.g. monetization))).
    We also have here acts of abuse of market power, wire fraud, investment fraud, blackmailing, racketeering, conspiracy (with e.g. Nvidia), etc., etc., etc..
    The laws are crystal clear: the first one is the one who got the exclusive rights granted by the societies for being creative and for disclosing, presenting, and discussing the creation with the public, and hence is in command.
    The last one, who thought to be very clever and was supported by politicians and governments, is now in jail for 25 years.

    So they must steal more of our business to be able to finance it, whereas we can finance it by legal activities.

    As we already explained before, we do not have to modify our OS with its OSA, OSC, Ontoverse (Ov), and Ontoscope (Os), and even if that were the case, then we would not need to provide essential facilities, in case this infringes our moral rights, because the complete OS with its OSA, OSC, Ov, and Os has been created by C.S.. This is a totally different legal situation than the integration of, for example, an operating system (os) with a web browser, Search Engine (SE), chatbot, or another technology, good, and service.

    As we already said before, for example in the ... of the 24th of July 2024 "is only useful in a wider scope of technologies, goods, and services".

    See also the notes

  • [This and That] 'R' Us,
  • OSC standard and mandatory with OsC of the 26th of July 2024, and
  • Social media services are copyright infringements of today.

    Some questions:
    Why have Nvidia and Jeff Bezos invested in Perplexity AI in January 2024?
    What kind of strategy of Amazon respectively Bezos is that with LLMs, hosting, training, and so on?
    FWIW Atlassian? What is that? "Microsoft [(e.g. GitHub)], which is one of Atlassian's top rivals [(e.g. Git-based Bitbucket)], is a large financial backer of OpenAI. Consequently, when GPT-4 responds to user input such as a request for information [...], the underlying computing work happens in a cloud service run by Microsoft."
    Why SearchGPT despite Bing with Copilot (Bing Chat) already exists?


    04.August.2024

    03:02, 09:15, and 12:11 UTC+2
    Investors want to steal our royalties and profits

    National and multinational

  • investment banks, like for example JPMorgan Chase, Bank of America, Goldman Sachs, Morgan Stanley, Citigroup, Deutsche Bank, Credit Suisse→UBS, Barclays, and others,
  • investment advisory service providers,
  • investment management companies (private equity firms),
  • venture capital investors,
  • angel investors,
  • and so on

    do know exactly that they have conspired with other entities in relation to the infringements of the rights and properties of C.S. and our corporation, specifically by investing in plagiarisms and fakes of our original and unique works of art.
    And as not expected otherwise, they now begin to complain about the high investments in the infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies, because they want to continue with stealing our royalties and profits as well as businesses.

    We always said that we will reinvest most of our royalties and profits in our businesses.
    In addition, these investments are also securing the core businesses of other companies.
    And yes, this level of investment is sustainable, because these are the rights and properties, including the money, of C.S. and our corporation and we do what we want with them.
    An alternative would be to increase the distribution or ratio of shares, for example by adding 1% each fiscal quarter, and to make no investments in infrastructure.
    We do get all of our rights and properties back and also damage compensations, which are the higher of the apportioned compensation, profit, and value, or 2 or more of them.
    So either they

  • pay relatively low royalties, and we get the majority of the shares, if their prices are high and the issue is resolved with the fast track damage compensation option, comprising the establishment of Joint Ventures (JVs), or
  • pay relatively high royalties, and we buy the majority of the shares, if their prices are low and the issue is resolved in a usual way.

    The result is always the same. See also the note OpenAI 'R' Us, one way or another of the 10th of July 2024.

    The alternative to our fast track option is the insolvency application in many cases. Therefore, investors must be very careful in what they will do in the near future.
    We also said that investors have to pay damages as well, specifically if they were aware about the true legal situation.

    Some questions:
    FWIW Databricks? What is that? Why has Microsoft invested in Databricks in February 2019 even together with lead investor Andreessen Horowitz, and Amazon→Amazon Web Services, Alphabet (Google)→CapitalG, and others in January 2021, and also Nvidia and others in September 2023?
    Same as with OpenAI, Anthropic, Mistral, Nvidia, and others, we demand 99% of all shares of Databricks as damage compensations, specifically due to the reason that all of its Free and Open Source Software (FOSS) projects are infringements of the rights and properties of C.S. and our corporation.
    We are also trying to estimate the damages that companies like Berkshire Hathaway have to pay us.
    Several patterns of serious criminal activities are obvious, including wire fraud, conspiracy, and so on alone and together with other investors and start-ups since at least 2010.


    06.August.2024

    12:24 UTC+2
    Only prior art, C.S.' art, and own art at OpenAI

    We quote a report: "[...]
    The new lawsuit, filed against OpenAI, Altman and co-founder Gregory Brockman, made the same claims[, which were made in the first lawsuit before ...] and is nearly double in length. In contrast to the original suit, it includes claims that OpenAI is engaging in racketeering activity.
    [...]
    The lawsuit alleges that [the plaintiff] was "betrayed by Altman and his accomplices. The perfidy and deceit are of Shakespearean proportions."
    [...]
    The 83-page lawsuit claims OpenAI's partnership with Microsoft "flipped the narrative" of the company's original mission.
    "In partnership with Microsoft, Altman established an opaque web of for-profit OpenAI affiliates, engaged in rampant self-dealing, seized OpenAI, Inc.'s Board, and systematically drained the non-profit of its valuable technology and personnel," the lawsuit said.
    [...]
    "[The plaintiff]'s case against Sam Altman and OpenAI is a textbook tale of altruism versus greed. Altman, in concert with other Defendants, intentionally courted and deceived [the plaintiff], preying on [the plaintiff]'s humanitarian concern about the existential dangers posed by artificial intelligence," the lawsuit said.
    The lawsuit seeks "a constructive trust on Defendants' ill-gotten gains, property, and assets traceable to [the plaintiff]'s significant contributions to OpenAI" and that "a judicial determination that OpenAI, Inc.'s license to Microsoft is null and void.""

    Comment
    We are not sure why the plaintiff files a lawsuit against the company OpenAI, which in large parts reflects our claims and demands, and what the goal and the result of this lawsuit will be.
    But we do know that the plaintiff must show what exactly his art and contribution are, and that they are definitely not prior art and not the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S..

    Indeed, the license of OpenAI to Microsoft is null and void, but not for the reason as claimed in this lawsuit.

    Eventually, we recommend once again to tell the truth about the true source of inspiration and blueprint.


    07.August.2024

    08:51 and 17:18 UTC+2
    Investment banks, investors, asset managers, and Co. 'R' Us

    In most cases a takeover or Joint Venture (JV) is not unlikely.

    We get all of our moneys back. :)

    See also the notes

  • Without complete damages no essential facilities of the 13th of June 2024 and
  • SOPR will not give up on controls and damages of the 19th of June 2024.

    10:45 UTC+2
    SOPR considering temporary OntoBot choice screen

    Ontologic roBot (OB or OntoBot)

    In wise foresight of various legal and technological developments in relation to our OntoBot, and OntoSearch and OntoFind OSComponents (OSC) we already publicized the note OSC standard and mandatory with OsC of the 26th of July 2024:
    "Actually, we are considering the OSC implementations

  • Google Search and Google Gemini and Google Assistant,
  • Microsoft Bing and Copilot, and
  • Amazon search and Rufus, Q, and Alexa,

    and also

  • Apple Siri,

    as well as

  • some few other implementers of our OSC."

    But we are thinking of listing not Search Engines (SEs), but directly our OntoBot (variants):

  • Google Gemini, or Gemini Search or Gemini Find,
  • Microsoft Copilot, or Copilot Bing or Copilot Find,
  • Amazon Alexa, or Alexa Evi or Alexa Find, and
  • Apple Siri, or Siri Search or Siri Find.

    The provision of 4 web browsers as options to choose from was sufficient in the case of the company Microsoft and its Internet Explorer web browser.

    SearchGPT, Perplexity AI, and You.com might be added, but only after the transfer of 99% of shares as damage compensations in each case.

    But as we said in a note before, "this is only a temporary solution, because the correct way is to say that it is not Linux, Android, FireOS, etc., iOS, etc., Windows, etc., and Harmony, but Ontologic System (OS), OntoL4, OntoLinux, OntoLix, OntoWin, etc., with our OS Architecture (OSA), and our OntoBot, OntoScope, OntoSearch and OntoFind, and OntoSocial, and also OntoNet, OntoWeb, and OntoVerse, as well as other OS Components (OSC), which are standard and mandatory with access places and access devices (e.g. Ontoscope (Os), also wrongly called smartphone, AI phone, etc.) in the legal scope of ... the Ontoverse (Ov)".

    See also the related messages, notes, explanations, clarifications, investigations, and claims of the last months, specifically

  • Some thoughts and notes of the 22nd of July 2024, and
  • Clarification of the 2nd of August 2024.


    11.August.2024

    10:14 UTC+2
    Comment of the Day

    "On the ingeniousness of "ontologic programming".", [C.S., Today]

    In this sense:
    "On the foolishness of "natural language programming".", [Edsger Wybe Dijkstra, 1979]
    See also the Clarification of the 2nd of August 2024 (keyword natural language programming).

    See also the related messages, notes, explanations, clarifications, investigations, and claims, specifically

  • Clarification of the 27th of April 2016,
  • Clarification of the 29th of April 2016,
  • Comment of the Day of the 5th of May 2016,
  • OS is ON of the 9th of May 2016,
  • Ontologic Web Further steps of the 9th of December 2016, and
  • Clarification of the 15th of April 2017,

    and the other publications cited therein.

    22:29 UTC+2
    Clarification

    *** Work in progress - some better comments (e.g. semantic search vs. semantic similarity search) ***
    After we discussed various matters in general over several years and in more detail in the recent past, for example in the

  • Clarification of the 29th of May 2024 and
  • Clarification of the 2nd of August 2024,

    the newest insights and activities led us to focus on the integration of the matters discussed in the last clarifications and the fields of

  • Connectionist Network (CN or CNet) or artificial neural networks (ANN),
  • integrated subsymbolic and symbolic system, including unified and hybrid system,
  • Information Retrieval (IR) System (IRS), including Information Filtering (IF) System (IFS), including Recommendation System or Recommender System (RecS), and also Search System (SS) or Search Engine (SE), and Question Answering (QA) System (QAS),
  • Knowledge Representation and Reasoning (KRR), including
    • Semantic Net (SN),
    • Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG),
    • etc.,
  • Knowledge Retrieval (KR) System (KRS), including
    • Semantic Search Engine (SSE),
  • Information System (IS), including
    • Knowledge Management System (KMS),
  • Computational Linguistics (CL), including
    • Speech Processing (SP), including
      • speech recognition and
      • speech synthesis,
  • Natural Language Processing (NLP), including
    • Natural Language Parsing (NLP) and
    • Natural Language Generation (NLG),
  • Natural Language Understanding (NLU),
  • Dialog System (DS or DiaS), comprising Dialogue Management System (DMS),
  • Conversational System (CS or ConS), including
    • Conversational Interface (CI),
    • Conversational User Interface (CUI), and
    • Conversational Agent System (CAS or ConAS),
  • chatbot,
  • virtual assistant,
  • Intelligent Agent System (IAS),
  • etc.,

    which are included in our

  • Ontologic System Components (OSC)
    • Ontologic roBot (OntoBot),
    • Ontologic Search (OntoSearch) and Ontologic Find (OntoFind), and
    • OntoSocial,

    and

  • Ontologic Applications and Ontologic Services (OAOS).

    We quote a wiki hypertext publication about the subject word embedding, which was publicized on the 14th of August 2014 (first version): "Word embedding is a general technique in Natural Language Processing where words from the vocabulary (and possibly phrases thereof) are mapped to vectors of real numbers in a low dimensional space, relative to the vocabulary size ("continuous space").
    There are several methods for generating this mapping. They include neural networks[1], dimensionality reduction on the word co-occurrence matrix[2], and explicit representation in terms of the context in which words appear[3].
    Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing[4] and sentiment analysis[5].
    [...]"

    Comment
    We note

  • no word2vec (2013) (added on the 20th of February 2016),
  • no Probabilistic Model (PM or ProM) (added on the 25th of May 2016),
  • no explainable knowledge base method (added on the 9th of June 2018), and
  • no knowledge-based method (e.g. KG) (added on the 9th of June 2018).

    No relevant prior art is referenced, which was publicized before 2000.

    We quote a wiki hypertext publication about the subject word embedding: "In natural language processing (NLP), a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning.[1 [Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition] Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers.
    Methods to generate this mapping include neural networks,[2] dimensionality reduction on the word co-occurrence matrix,[3][4][5] probabilistic models,[6] explainable knowledge base method,[7] and explicit representation in terms of the context in which words appear.[8]
    Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing[9] and sentiment analysis.[10]

    Development and history of the approach
    In distributional semantics, a quantitative methodological approach to understanding meaning in observed language, word embeddings or semantic feature space models have been used as a knowledge representation for some time.[11] Such models aim to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that "a word is characterized by the company it keeps" was proposed in a 1957 article by John Rupert Firth,[12] but also has roots in the contemporaneous work on search systems[13] and in cognitive psychology.[14]
    The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is the vector space model for information retrieval.[15][16][17] Such vector space models for words and their distributional data implemented in their simplest form results in a very sparse vector space of high dimensionality (cf. curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the random indexing approach for collecting word co-occurrence contexts.[18][19 [From words to understanding. Foundations of Real-World Intelligence. [2001]]][20][21] In 2000, Bengio et al. provided in a series of papers titled "Neural probabilistic language models" to reduce the high dimensionality of word representations in contexts by "learning a distributed representation for words".[22 [A Neural Probabilistic Language Model. [2000]]][23 [A Neural Probabilistic Language Model. Journal of Machine Learning Research. [2003]]][24 [A Neural Probabilistic Language Model. Studies in Fuzziness and Soft Computing. [2006]]]
    A study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example of self-supervised learning of word embeddings[25 [Inferring a semantic representation of text via cross-language correlation analysis. [2002]]]
    [...]
    The approach has been adopted by many research groups after theoretical advances in 2010 had been made on the quality of vectors and the training speed of the model, as well as after hardware advances allowed for a broader parameter space to be explored profitably. In 2013, a team at Google led by Tomas Mikolov created word2vec, a word embedding toolkit that can train vector space models faster than previous approaches. The word2vec approach has been widely used in experimentation and was instrumental in raising interest for word embeddings as a technology, moving the research strand out of specialised research into broader experimentation and eventually paving the way for practical application.[31]
    [...]"

    Comment
    As others and we have already noted in relation to neural LMs, no prior art is referenced that was publicized before 2000.
    Also, no prior art is referenced other than LSA, Singular Value Decomposition (SVD), LSI, integrations, etc..
    See also the note These 'R' Us of the 11th of October 2023.
    Specifically interesting are the facts that

  • neural LMs exist only since 2000,
  • an LM is a Probabilistic Model (PM or ProM) of an NL,
  • probabilistic IRMs include LMs,
  • "Can Artificial Neural Networks Learn Language Models?" (2000), and
  • "A Neural Probabilistic Language Model" (2000 and 2003).
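    The relationship noted above, that an LM is a Probabilistic Model of an NL, can be made concrete with a minimal sketch: a bigram LM is nothing more than conditional probabilities estimated from token counts. The corpus and the helper names below are illustrative assumptions only, not taken from any of the cited works.

```python
from collections import Counter

def train_bigram_lm(corpus):
    """Estimate a bigram Language Model (LM), i.e. a Probabilistic
    Model (PM) of a Natural Language (NL), from a list of tokens."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    def prob(word, prev):
        # Maximum-likelihood estimate of P(word | prev)
        return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

    return prob

tokens = "the boat is on the water the ship is on the sea".split()
p = train_bigram_lm(tokens)
print(p("is", "boat"))    # -> 1.0  ("boat" is always followed by "is")
print(p("water", "the"))  # -> 0.25 ("the" is followed by 4 different words)
```

    A real neural LM replaces these counted tables with a learned, distributed parameterization, but it estimates the same kind of conditional probabilities.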

    We quote the webpage titled "A brief history of word embeddings (and some clarifications)" and publicized on the 30th of September 2015: "One of the strongest trends in Natural Language Processing (NLP) at the moment is the use of word embeddings, which are vectors whose relative similarities correlate with semantic similarity. [...]
    [...]
    First, a note on terminology. Word embedding seems to be the dominating term at the moment, no doubt because of the current popularity of methods coming from the deep learning community. In computational linguistics, we often prefer the term distributional semantic model (since the underlying semantic theory is called distributional semantics). There are also many other alternative terms in use, from the very general distributed representation to the more specific semantic vector space or simply word space.
    [...]
    Methods for using automatically generated contextual features were developed more or less simultaneously around 1990 in several different research areas. One of the most influential early models was Latent Semantic Analysis/Indexing (LSA/LSI), developed in the context of information retrieval, and the precursor of today's topic models. At roughly the same time, there were several different models developed in research on artificial neural networks that used contextual representations. The most well-known of these are probably Self Organizing Maps (SOM) and Simple Recurrent Networks (SRN), of which the latter is the precursor to today's neural language models. In computational linguistics, Hinrich Schütze developed models that were based on word co-occurrences, which was also used in Hyperspace Analogue to Language (HAL) that was developed as a model of semantic memory in cognitive science.
    Later developments are basically only refinements of these early models. Topic models are refinements of LSA, and include methods like probabilistic LSA (PLSA) and Latent Dirichlet Allocation (LDA). Neural language models are based on the same application of neural networks as SRN, and include architectures like Convolutional Neural Networks (CNN) and Autoencoders. Distributional semantic models are often based on the same type of representation as HAL, and includes models like Random Indexing and BEAGLE.
    The main difference between these various models is the type of contextual information they use. LSA and topic models use documents as contexts, which is a legacy from their roots in information retrieval. Neural language models and distributional semantic models instead use words as contexts, which is arguably more natural from a linguistic and cognitive perspective. These different contextual representations capture different types of semantic similarity; the document-based models capture semantic relatedness (e.g. "boat" - "water") while the word-based models capture semantic similarity (e.g. "boat" - "ship"). This very basic difference is too often misunderstood. Speaking of common misunderstandings, there are two other myths that need debunking:

  • There is no need for deep neural networks in order to build good word embeddings. In fact, two of the most successful and acknowledged recent models - the Skipgram and CBoW models included in the word2vec library - are shallow neural networks of the same flavor as the original SRN.
  • There is no qualitative difference between (current) predictive neural network models and count-based distributional semantics models. Rather, they are different computational means to arrive at the same type of semantic model; several recent papers [(2014 - 2015)] have demonstrated both theoretically and empirically the correspondence between these different types of models [...].

    [...]"

    Comment
    Yes, it is as it is.

    And as we already explained in relation to the field of Language Model (LM) (see also the note These 'R' Us of the 11th of October 2023 (chapter 4) and the Clarification of the 2nd of August 2024), our Evoos is based on model-reflection, Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), Information System (IS), including Knowledge Management System (KMS), Expert System (ES), which expresses contexts, ontologies, and language models, and also, logics, AI, ML, CI, ANN, ABS, MAS, HAS, CogAS, and also Ontologic Computing (OC), including generalized Soft Computing (SC), integrating

  • Many-Valued Logics (MVL) (e.g. FL=Computing with Words (CwW)),
  • Connectionist Network (CN or CNet) or Artificial Neural Network (ANN),
  • Probabilistic Model (PM or ProM), and
  • Evolutionary Computing (EC), and also
  • integrations of these fields (e.g. Neural Fuzzy System or Fuzzy Neural Integrated System),

    and much more.
    It should be easy to see the integrated ANNs, which use contextual representations.
    This results in our Ontologic Model (OM), which includes Foundational Model (FM) (e.g. Foundation Model (FM)), Capability and Operational Model (COM), ANNM, ANNLM, and LLM.
    Our OS adds hypergraph and so on, so that for example Hyperspace Analogue to Language (HAL) is also included, though semantic memory already belongs to the Cognitive System architecture of Evoos.

    Also note that there is no qualitative difference.

    For a better understanding, we also quote the document titled "Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems" (1996), specifically its chapter 21.4 Fuzzy Neural Networks For Speech Recognition: "[...]
    Another problem arises when combining neural networks with a language model in which top-N candidate performance is required. This problem derives from simply using discrete phoneme class information as a target value in the conventional method. Thus, the neural network is trained to produce the top phoneme candidate but not the top-N candidates. However, the top-N phoneme candidate information is very important when combined with a language model in continuous speech recognition. [...]
    One way to overcome these problems is to take the fuzzy neural approach, which considers the collection of patterns a fuzzy set where each pattern identifies itself with a continuous membership value. [...], the fuzzy neural training method and the conventional neural training method for speech recognition are both based on the back-propagation algorithm. However, they differ in how they give the target values to the neural network. In the conventional method, target values are given as discrete phoneme class information, that is, 1 for the belonging class and 0s for the others. In the fuzzy neural training method, the target values are given as fuzzy phoneme class information whose values are given in between 0 and 1. [...]
    Fuzzy phoneme class information can be modeled by considering the distance, for instance, the Euclidean distance measure between the input sample and the nearest sample of each phoneme class in the training set [A neural fuzzy training approach for continuous speech recognition improvement, 1992]. [...]"

    Comment

    See also the "Forming Global Representations with Extended backPropagation" (FGREP) (1988) method, mechanism, and architecture of flat or shallow Artificial Neural Network Models (ANNMs) (e.g. SRN) with real-valued numbers as vector values for syntactic data representation or embeddings.
    "The FGREP representations are found to encode the implicit categorization of words, to form a basis for robust processing and to facilitate generalization." "Natural Language Processing With Modular Neural Networks and Distributed Lexicon" (1991)
    The FGREP is based on flat or shallow Artificial Neural Network Models (ANNMs).
    Also compare the word2vec model family of flat or shallow, two-layer Artificial Neural Network Models (ANNMs) (e.g. SRN) with real-valued numbers as vector values, which can be negative, with the

  • set-theoretic Information Retrieval Models (IRMs) with transcendent term-interdependencies (e.g. Fuzzy Set Model (FSM)) and
  • algebraic IRMs with transcendent term-interdependencies (e.g. Topic-based Vector Space Model (TVSM), ontology-based or enhanced TVSM (eTVSM), and ANN trained by the backpropagation method).

    Whether one takes the interval [-0.5 ... 0 ... 0.5] instead of the interval [0 ... 0.5 ... 1] is a matter of subjective taste, but definitely not a new expression of idea or creation, and not an inventive step or innovation.
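    The equivalence of the two value intervals can be sketched with a simple affine map; the helper below is illustrative only and assumes nothing beyond elementary arithmetic.

```python
def rescale(values, old=(0.0, 1.0), new=(-0.5, 0.5)):
    """Affinely map vector component values from one interval to the
    other; the map is a bijection that preserves relative order and
    ratios of distances, so no information is gained or lost."""
    (a, b), (c, d) = old, new
    scale = (d - c) / (b - a)
    return [c + (v - a) * scale for v in values]

membership = [0.0, 0.25, 0.5, 1.0]   # values in [0 ... 0.5 ... 1]
centered = rescale(membership)       # values in [-0.5 ... 0 ... 0.5]
print(centered)  # -> [-0.5, -0.25, 0.0, 0.5]
```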

    Our Evoos with its Evoos Architecture (EosA) integrates

  • dynamic KG, expressing contexts, ontologies, and language models, and also ontological frame, and
  • Ontologic Computing (OC), including generalized Soft Computing (SC), integrating
    • Many-Valued Logics (MVL) (e.g. FL=Computing with Words (CwW)),
    • Connectionist Network (CN or CNet) or Artificial Neural Network (ANN),
    • Probabilistic Model (PM or ProM), and
    • Evolutionary Computing (EC), and also
    • integrations of these fields (e.g. Neural Fuzzy System or Fuzzy Neural Integrated System),
  • and much more

    and has the basic properties of, for example, the DISPAR and DISCERN systems based on the FGREP method, mechanism, and architecture with global lexicon.
    It should be easy to see the integrated ANNs, which use contextual representations, and that the lexicon included in the semantic structure turns syntactic data representation into semantic representation and, with ANN, into word embedding.
    Evoos also has dynamic or contextualized embeddings.
    Our OS with its OS Architecture (OSA) integrates ANN with TVSM and also Speech Recognition (SR), which includes FGREP and hence the functionality of word2vec.
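    The common core of the VSM-style similarity measures and the ANN word embeddings discussed above is vector similarity. A minimal sketch, with hand-made toy vectors (deliberately not trained embeddings):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors; values
    closer to 1 indicate higher semantic similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy, hand-made 3-dimensional vectors for illustration only
boat  = [0.9, 0.1, 0.3]
ship  = [0.8, 0.2, 0.4]
piano = [0.1, 0.9, 0.2]

# Semantically similar words should lie closer in the vector space
print(cosine_similarity(boat, ship) > cosine_similarity(boat, piano))  # -> True
```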

    We quote a wiki hypertext publication about the subject Retrieval Augmented Generation (RAG): "Retrieval augmented generation (RAG) is a type of information retrieval process. It modifies interactions with a large language model (LLM) so that it responds to queries with reference to a specified set of documents, using it in preference to information drawn from its own vast, static training data. This allows LLMs to use domain-specific and/or updated information.[1] Use cases include providing chatbot access to internal company data, or giving factual information only from an authoritative source.[2]

    Process
    The RAG process is made up of four key stages. First, all the data must be prepared and indexed for use by the LLM. Thereafter, each query consists of a retrieval, augmentation and a generation phase.[1]

    Indexing
    The data to be referenced must first be converted into LLM embeddings, numerical representations in the form of large vectors. RAG can be used on unstructured (usually text), semi-structured, or structured data (for example knowledge graphs).[1] These embeddings are then stored in a vector database to allow for document retrieval.

    [Image caption:] Overview of RAG process, combining external documents and user input into an LLM prompt to get tailored output

    Retrieval
    Given a user query, a document retriever is first called to select the most relevant documents which will be used to augment the query.[3] This comparison can be done using a variety of methods, which depend in part on the type of indexing used.[1]

    Augmentation
    The model feeds this relevant retrieved information into the LLM via prompt engineering of the user's original query.[2] Newer implementations (as of 2023) can also incorporate specific augmentation modules with abilities such as expanding queries into multiple domains, and using memory and self-improvement to learn from previous retrievals.[1]

    Generation
    Finally, the LLM can generate output based on both the query and the retrieved documents.[4] Some models incorporate extra steps to improve output such as the re-ranking of retrieved information, context selection and fine tuning.[1]

    [...]

    Challenges
    If the external data source is large, retrieval can be slow. The use of RAG does not completely eliminate the general challenges faced by LLMs, including hallucination.[3]"

    Comment
    What have we been talking about since 1999 and 2006?
    Guess where the term augmentation comes from.
    One source is the querying and augmenting of vector spaces and Latent Semantic Indexing (LSI) vector spaces in the fields of Information Retrieval (IR) and Latent Semantic Analysis (LSA), which is a technique in NLP and is also called LSI when utilized in IR, on the basis of algebraic IR models

  • without term-interdependencies (e.g. Vector Space Model (VSM)),
  • with immanent term-interdependencies (e.g. Generalized Vector Space Model (GVSM), Latent Semantic Index (LSI)), and
  • with transcendent term-interdependencies (e.g. Topic-based Vector Space Model (TVSM), ontology-based or enhanced TVSM (eTVSM), and ANN trained by backpropagation method).

    Classification of Information Filtering (IF) and Information Retrieval (IR) Models (2003 and 20060117)
    PD Dominik Kuropka, Thomas Hofmann, and C.S.

    For the vector store index, see the Ontologic data storage Base (OntoBase) and the message OntoLix and OntoLinux Further steps of the 20th of February 2014.

    The first complete version of the webpage was publicized only on the 16th of July 2024 and got one major update on the 24th of July 2024.

    We also note

  • Knowledge Representation and Reasoning (KRR) (semantic structures or knowledge representation formalisms) and
  • Semantic Search Engine (SSE).

    RAG does not solve the deficits of the LLM at all, because they are system-immanent, and an LLM with RAG is already no pure LLM anymore.
    The latter problem with the generation of output respectively response shows why the subsymbolic, connectionist, probabilistic, and statistic brute-force approach (e.g. Machine Learning (ML), including Artificial Neural Network (ANN), including Deep Learning (DL or DeepL), and specifically Large Language Model (LLM)) is not sufficient, and specifically not efficient.
    As we said in the Clarification of the 29th of May 2024, removing the remaining 14.5%, 23.43%, or higher ratios of hallucinations and the other deficits is the most difficult task.
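    The retrieve-augment-generate loop described in the quoted publication can be sketched as follows; embed is a deliberately crude character-count stand-in for a trained embedding model, the generation step is omitted, and all names are illustrative assumptions:

```python
import math

def embed(text):
    """Hypothetical stand-in for a trained embedding model: a crude
    bag-of-characters vector, enough to demonstrate the pipeline."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, store, k=1):
    """Semantic similarity search: rank stored documents by the cosine
    similarity of their embeddings to the query embedding."""
    q = embed(query)
    ranked = sorted(store, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def augment(query, documents):
    """Build the augmented prompt that would then be sent to the LLM."""
    context = "\n".join(documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["The OntoBase stores vectors.", "Bananas are yellow."]
query = "What stores vectors?"
prompt = augment(query, retrieve(query, docs))
```

    A production system would replace embed with a learned model, the document list with a vector database index, and would pass the augmented prompt to an LLM for the generation phase.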

    We quote the document titled "Build AI Apps with Retrieval Augmented Generation (RAG)" and publicized on the 8th of January 2024: "[...]
    What is Retrieval-Augmented Generation (RAG)?
    So, what's the deal with Retrieval Augmented Generation, or RAG? First, let's keep in mind what Large Language Models (LLMs) are good at - generating content through natural language processing (NLP). If you ask a Large Language Model to generate a response on data that it has never encountered (maybe something only you know, domain specific information or up to date information that the large language models haven't been trained on yet), it will not be able to generate an accurate answer because it has no knowledge of this relevant information.

    Enter Retrieval Augmented Generation
    Retrieval Augmented Generation acts as a framework for the Large Language Model to provide accurate and relevant information when generating the response. To help explain "RAG," let's first look at the "G." The "G" in "RAG" is where the LLM generates text in response to a user query referred to as a prompt. Unfortunately, sometimes, the models will generate a less-than-desirable response.
    [...]
    [...]
    Here are some of the problems that we face with the response generated:

  • No source listed, so you don't have much confidence in where this answer came from
  • Out-of-date information
  • Answers can be made up based on the data the LLM has been trained on; we refer to this as an AI hallucination
  • Content is not available on the public internet, where most LLMs get their training data from.

    [...]
    So, what is the point of using an LLM if it will be problematic? This is where the "RA" portion of "RAG" comes in. Retrieval Augmented means that instead of relying on what the LLM has been trained on, we provide the LLM with the correct answer with the sources and ask to generate a summary and list the source. This way, we help the LLM from hallucinating the answer.
    We do this by putting our content (documents, PDFs, etc) in a data store like a vector database. In this case, we will create a chatbot interface for our users to interface with instead of using the LLM directly. We then create the vector embeddings of our content and store it in the vector database. When the user prompts (asks) our chatbot interface a question, we will instruct the LLM to retrieve the information that is relevant to what the query was. It will convert the question into a vector embedding and do a semantic similarity search using the data stored in the vector database. Once armed with the Retrieval-Augmented answer, our Chatbot app can send this and the sources to the LLM and ask it to generate a summary with the user questions, the data provided, and evidence that it did as instructed.
    Hopefully, you can see how RAG can help LLM overcome the above-mentioned challenges. First, with the incorrect data, we provided a data store with correct data from which the application can retrieve the information and send that to the LLM with strict instructions only to use that data and the original question to formulate the response. Second, we can instruct the LLM to pay attention to the data source to provide evidence. We can even take it a step further and require the LLM to respond with "I don't know" if the question can't be reliably answered based on the data stored in the vector database.

    How Does RAG Work?
    Retrieval augmented generation begins with identifying the data sources that you would like to use for your generative ai application to provide contextually relevant results. The content from the data sources is converted into vector embeddings, or a numerical representation of the data, using the chosen machine learning model and stored in a vector database. The application invokes a semantic search [semantic similarity search] with the vector database when it gets a query about the data, for example in a form of a question from a chatbot. This query is converted into a vector embedding which is then used in the semantic search [semantic similarity search] to find relevant documents or data. The search results, along with the initial query and a prompt are then send to the LLM to generate contextually relevant information retrieval.

    RAG vs. Fine tuning a Model
    Beginning with RAG is a suitable entry point, offering simplicity and potential adequacy for your applications. A sophisticated, prompt engineering approach will enhance the response even more. In contrast, fine-tuning serves a distinct purpose, mainly when the goal is to modify the behavior of the language model or adapt it to comprehend a different "language." These approaches can be complementary rather than mutually exclusive. A strategic approach involves fine-tuning to enhance the model's grasp of domain-specific language and desired output while leveraging RAG to elevate response quality and relevance.

    Addressing RAG Challenges Head-On
    LLMs in the Dark about Your Data
    Traditional LLMs are only trained on datasets they can access within their training cut-off points. This cutoff point renders the LLM's dataset static and prone to responding incorrectly or providing outdated information when faced with queries outside their training scope.

    AI Applications Demand Custom Data for Effectiveness
    Organizations require models to understand their domain and provide answers derived from their specific data to ensure LLMs deliver relevant and particular responses. This is crucial for applications such as customer support bots or internal Q&A bots, which must furnish company-specific answers without extensive retraining.

    Retrieval Augmentation as an Industry Standard
    RAG has become a standard industry practice in such a short time. By including relevant data as part of the prompt, RAG connects static LLMs with real-time data retrieval, surpassing the limitations imposed by static training data.

    Retrieval Augmented Generation Use Cases
    Below are the most popular RAG use cases:
    01. Question and Answer [(QA or Q&A)] Chatbots: Automated customer support and resolved queries by deriving accurate answers from [...] documents and knowledge bases.
    02. Search Augmentation: Enhancing search engines with LLM-generated answers to improve informational query responses and facilitate easier information retrieval.
    03. Knowledge Engine for Internal Queries: Enabling [users] to ask questions about company data [...].

    Benefits of RAG
    01. Up-to-date and Accurate Responses: RAG ensures LLM responses are based on current external data sources, mitigating the reliance on static training data.
    02. Reduced Inaccuracies and Hallucinations: By grounding LLM output in relevant external knowledge, RAG minimizes the risk of providing incorrect or fabricated information, offering outputs with verifiable citations.
    03. Domain-Specific, Relevant Responses: Leveraging RAG allows LLMs to provide contextually relevant responses tailored to an organization's proprietary or domain-specific data.
    04. Efficient and Cost-Effective: RAG is simple and cost-effective compared to other customization approaches, enabling organizations to deploy it without extensive model customization.

    Reference Architecture for RAG Applications
    01. Data Preparation: Gather content from your data sources, preprocess it, split it into suitable lengths based on your chunking strategy, and convert the data into vector embeddings with your embedding model of choice and downstream LLM application.
    02. Index Relevant Data: Produce document embeddings and generate a Vector Search index with this data. Vector databases will automatically create the index for you and provide a whole host of data management capabilities.
    03. Retrieve Relevant Data: Retrieve data relevant to a user's query and provide it as part of the prompt used for the summary generation by the LLM.
    04. Build AI Applications: Wrap prompts with augmented content and LLM querying components into an endpoint, exposing it to applications such as Q&A chatbots through a[n ...] API.
    05. Evaluations: Consistent evaluation of response effectiveness to queries. Ground truth metrics compare RAG responses with established answers. Conversely, metrics like the RAG Triad assess relevance between queries, context, and responses. LLM response metrics consider friendliness, harmfulness, and conciseness.

    Key Elements of RAG Architecture
    01. Vector Database: AI applications benefit from vector databases for fast similarity searches, ensuring access to up-to-date information.
    02. Prompt Engineering: Sophisticated instructions to the LLM to use only the provided content to generate the response.
    03. [Extract, Transform, Load (]ETL[)] Pipeline: An ingestion pipeline to help handle duplicate data, upserts, and any transformations (text splitting, metadata extraction, etc.) required before storing the data in the Vector Database.
    04. LLM: [...].
    05. Semantic Cache: A semantic cache [...] that stores your LLM responses to reduce spend and increase performance.
    06. RAG tools: In RAG, using third-party tooling ([blacklisted tool 1, blacklisted tool 2,] Semantic Kernel, etc.) helps build RAG models and is often LLM-agnostic.
    07. Evaluation Tools & Metrics: Metrics, LLMs, and 3rd party tools to help with evaluations [...]
    08. Governance and Security of RAG systems.

    Navigating the RAG Architectural Landscape
    In AI, companies see that Retrieval Augmented Generation is a game-changer, not just a tool. It seamlessly blends LLMs with custom data, delivering responses that are accurate, current, and industry-specific. RAG leads AI towards a future where accuracy meets flexibility, and today's language models become tomorrow's smart conversationalists. [...]
    [...]"
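    The five reference-architecture steps quoted above can be sketched end-to-end in a few lines. This is a minimal sketch, not the article's implementation: the chunk texts are hypothetical, and a toy bag-of-words embedding stands in for a real embedding model and vector database.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding; a real system would call a trained
    # embedding model here (step 01 of the quoted architecture).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(chunks):
    # Step 02: produce document embeddings and keep them in an index.
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(index, query, k=2):
    # Step 03: retrieve the chunks most similar to the user's query.
    ranked = sorted(index, key=lambda e: cosine(e[1], embed(query)), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def make_prompt(query, context):
    # Step 04: wrap the retrieved content around the user query
    # before sending it to the LLM endpoint.
    joined = "\n".join(context)
    return f"Use only the following context:\n{joined}\n\nQuestion: {query}"

index = build_index([
    "The refund policy allows returns within 30 days.",
    "Shipping takes five business days on average.",
    "Support is reachable by email around the clock.",
])
prompt = make_prompt("How long do refunds take?",
                     retrieve(index, "refund returns policy", k=1))
```

    Step 05 (evaluation) would compare such prompts and the resulting answers against ground-truth responses.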

    Comment
    We note

  • semantic similarity search,
  • verifiable,
  • domain-specific, specifically Domain-Specific Language (DSL),
  • contextual,
  • vector search index,
  • ground truth,
  • semantic cache, and
  • semantic kernel.

    1. Because of the coherent Ontologic Model (OM) and Ontologic roBot (OntoBot), we call it Conversational System (CS or ConS), including Conversational Agent System (CAS), MultiModal Dialogue System (MDS or MMDS), and Ontological Dialog System or Ontologic Dialog System (ODS), but not chatbot.
    2. Because of the coherent Ontologic Model (OM) (e.g. ANNM (e.g. ANNLM (e.g. LLM))), we call it Ontologic roBot (OntoBot), but not Large Language Model (LLM).
    3. Because of the Information Retrieval (IR) System (IRS) (e.g. algebraic IRM (e.g. vector space indexing, Latent Semantic Indexing (LSI))) and the Knowledge Retrieval (KR) System (KRS) (e.g. Semantic Search Engine (SSE)), we call it Ontologic Search (OntoSearch) and Ontologic Find (OntoFind).
    4. Because of the vector database, vector space index, vector search index, and Latent Semantic Analysis (LSA), specifically Vector Space Model (VSM) and Latent Semantic Indexing (LSI), and also semantic cache, we call it Ontologic storage Base (OntoBase) and Ontologic File System (OntoFS).
    5. Because of points 1., 2., 3., and 4., we also call it our Retrieval Augmented Generation (RAG) with Ontologic Model (OM), which is part of OntoSearch and OntoFind.

    We quote the document titled "GraphRAG: Unlocking LLM discovery on narrative private data" and publicized on the 13th of February 2024: "
    Perhaps the greatest challenge - and opportunity - of LLMs is extending their powerful capabilities to solve problems beyond the data on which they have been trained, and to achieve comparable results with data the LLM has never seen. This opens new possibilities in data investigation, such as identifying themes and semantic concepts with context and grounding on datasets. In this post, we introduce GraphRAG, created implemented by Microsoft Research, as a significant advance in enhancing the capability of LLMs.
    Retrieval-Augmented Generation (RAG) is a technique to search for information based on a user query and provide the results as reference for an AI answer to be generated. This technique is an important part of most LLM-based tools and the majority of RAG approaches use vector similarity as the search technique. GraphRAG uses LLM-generated knowledge graphs to provide substantial improvements in question-and-answer performance when conducting document analysis of complex information. This builds upon our recent research, which points to the power of prompt augmentation when performing discovery on private datasets. Here, we define private dataset as data that the LLM is not trained on and has never seen before, such as an enterprise's proprietary research, business documents, or communications. Baseline RAG[1] was created to help solve this problem, but we observe situations where baseline RAG performs very poorly. For example:

  • Baseline RAG struggles to connect the dots. This happens when answering a question requires traversing disparate pieces of information through their shared attributes in order to provide new synthesized insights.
  • Baseline RAG performs poorly when being asked to holistically understand summarized semantic concepts over large data collections or even singular large documents.

    To address this, the tech community is working to develop methods that extend and enhance RAG (e.g., a blacklisted tool). Microsoft Research's implementation of an old new approach, GraphRAG, uses the LLM to create a knowledge graph based on the private dataset. This graph is then used alongside graph machine learning to perform prompt augmentation at query time. GraphRAG shows substantial improvement in answering the two classes of questions described above, demonstrating intelligence or mastery that outperforms other approaches previously applied to private datasets.

    Applying RAG to private datasets
    [...] complexity [...]
    We start with an exploratory query, which we pose to both a baseline RAG system [for coherent Ontologic Model (OM)] and to our new the other Ontologics' approach, GraphRAG:
    [...]
    In these results, we can see both systems perform well - highlighting a class of query on which baseline RAG performs well. Let's try a query that requires connecting the dots:
    [...]
    Baseline RAG fails to answer this question. Looking at the source documents inserted into the context window (Figure 1), none of the text segments discuss Novorossiya, resulting in this failure.
    [...]
    In comparison, the GraphRAG approach discovered an entity in the query [...]. This allows the LLM to ground itself in the graph and results in a superior answer that contains provenance through links to the original supporting text.
    [...]
    By using the LLM-generated knowledge graph, GraphRAG vastly improves the "retrieval" portion of RAG, populating the context window with higher relevance content, resulting in better answers and capturing evidence provenance.
    Being able to trust and verify LLM-generated results is always important. We care that the results are factually correct, coherent, and accurately represent content found in the source material. GraphRAG provides the provenance, or source grounding information, as it generates each response. It demonstrates that an answer is grounded in the dataset. Having the cited source for each assertion readily available also enables a human user to quickly and accurately audit the LLM's output directly against the original source material.
    However, this isn't all that's possible using GraphRAG.

    Whole dataset reasoning
    We illustrate whole-dataset reasoning abilities by posing the following question to the two systems:
    [...]
    Baseline RAG struggles with queries that require aggregation of information across the dataset to compose an answer. Queries such as "What are the top 5 themes in the data?" perform terribly because baseline RAG relies on a vector search of semantically similar text content within the dataset. There is nothing in the query to direct it to the correct information.
    However, with GraphRAG we can answer such questions, because the structure of the LLM-generated knowledge graph tells us about the structure (and thus themes) of the dataset as a whole. This allows the private dataset to be organized into meaningful semantic clusters that are pre-summarized. The LLM uses these clusters to summarize these themes when responding to a user query.
    [...]
    As anticipated, the vector search retrieved irrelevant text, which was inserted into the LLM's context window. Results that were included were likely keying on the word "theme," resulting in a less than useful assessment of what is going on in the dataset.
    Observing the results from GraphRAG, we can clearly see that the results are far more aligned with what is going on in the dataset as a whole. The answer provides the five main themes as well as supporting details that are observed in the dataset. The referenced reports are pre-generated by the LLM for each semantic cluster in GraphRAG and, in turn, provide provenance back to original source material.

    Creating LLM-generated knowledge graphs
    We note the basic flow that underpins GraphRAG, which builds upon our prior research and repositories using graph machine learning:

  • The LLM processes the entire private dataset, creating references to all entities and relationships within the source data, which are then used to create an LLM-generated knowledge graph.
  • This graph is then used to create a bottom-up clustering that organizes the data hierarchically into semantic clusters [...]. This partitioning allows for pre-summarization of semantic concepts and themes, which aids in holistic understanding of the dataset.
  • At query time, both of these structures are used to provide materials for the LLM context window when answering a question.

    An example visualization of the graph is shown in Figure 3. Each circle is an entity (e.g., a person, place, or organization), with the entity size representing the number of relationships that entity has, and the color representing groupings of similar entities. The color partitioning is a bottom-up clustering method built on top of the graph structure, which enables us to answer questions at varying levels of abstraction.

    Result metrics
    The illustrative examples above are representative of GraphRAG's consistent improvement across multiple datasets in different subject domains. We assess this improvement by performing an evaluation using an LLM grader to determine a pairwise winner between GraphRAG and baseline RAG. We use a set of qualitative metrics, including comprehensiveness (completeness within the framing of the implied context of the question), human enfranchisement (provision of supporting source material or other contextual information), and diversity (provision of differing viewpoints or angles on the question posed). Initial results show that GraphRAG consistently outperforms baseline RAG on these metrics. 
    In addition to relative comparisons, we also use [a self reflective plagiarism of our OntoBot] to perform an absolute measurement of faithfulness to help ensure factual, coherent results grounded in the source material. Results show that GraphRAG achieves a similar level of faithfulness to baseline RAG. [...]

    [...]"

    Comment
    So, so, "created by Microsoft". Really created by ..., or is this a typo and was it meant to be implemented by? We think that we are not being too sensitive in this case.

    In the related messages, notes, explanations, clarifications, investigations, and claims we already said that

  • on the one hand our Evoos has integrated all basic parts created by others and us, and
  • on the other hand no decisive prior art exists.

    We also note that what is wrongly called RAG is also a part of a Dialogue Manager (DM or DiaM) of a Conversational System (CS or ConS), and we have already proven that we created both, what is wrongly called

  • naive or baseline RAG based on vector space indexing and Latent Semantic Indexing (LSI),
  • advanced RAG based on metadata, etc.,
  • modular RAG, including graph RAG, based on KG, SE, (distributed parallel) LSI, Semantic Search Engine (SSE), Rewriting Logic, Evolutionary Computing, Swarm Computing, etc., and
  • multimodal RAG

    with our Evoos with its coherent Ontologic Model (OM), including Language Model (LM), MLModel, ANNModel, including ANNLM, including LLM and MultiModalANNM, Foundational Model (FM), including Foundation Model (FM), Information Retrieval Model (IRM), etc., and our OS with its OS Architecture (OSA), OS Components (OSC), and all the rest.
    And all that other blah blah blah is irrelevant from the legal point of view.

    We also note that the statements about the initial condition are wrong.

    In fact, one cannot compare both approaches in this way at all. While the LLM is trained on a vast amount of data, the algebraic IRMs (e.g. VSM and LSI) are only built on a few data. If the algebraic IRMs are built on the same vast amount of data with a Large Vector Space (LVS), then the results of the examples will be comparable or even better in the case of the VSM and LSI approaches, because the latter do not hallucinate. Algebraic IRMs are extremely powerful, specifically if one also adds KG to express contexts, ontologies (e.g. Topic Maps (TMs)), and language models, and also Fuzzy Logic (FL) (e.g. ANN, GA, and BN) and other non-classical logics, and our enhanced Fuzzy Logic (eFL) (e.g. integrated ANN, EC, etc.).
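    An algebraic IRM in miniature, as a sketch only: a tf-idf weighted Vector Space Model over a hypothetical three-document corpus. Every ranked result is a verbatim source document, so nothing can be hallucinated; LSI would additionally factor this term-document matrix with an SVD to capture latent term co-occurrence.

```python
import math
from collections import Counter, defaultdict

docs = {
    "d1": "fuzzy logic extends boolean logic with graded truth",
    "d2": "neural networks learn weights from data",
    "d3": "boolean retrieval matches terms exactly",
}

df = defaultdict(int)   # document frequency per term
tf = {}                 # term frequencies per document
for doc_id, text in docs.items():
    counts = Counter(text.split())
    tf[doc_id] = counts
    for term in counts:
        df[term] += 1

def weight(term, doc_id):
    # Classic tf-idf weighting of the term-document matrix.
    return tf[doc_id][term] * math.log(len(docs) / df[term])

def rank(query):
    # Score each document against the query terms and sort descending.
    scores = {}
    for doc_id in docs:
        s = sum(weight(t, doc_id) for t in query.split() if t in tf[doc_id])
        if s > 0:
            scores[doc_id] = s
    return sorted(scores, key=scores.get, reverse=True)
```

    A query with no matching terms simply returns nothing, instead of a fabricated answer.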

    For a better understanding, we also quote once again a wiki hypertext publication about the subject language model: "Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, furtherly causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[15]"
    But only in the naive implementation with a matrix. Furthermore, like LSI based on SVD, an ANNLM has to be rebuilt completely.

    We also note that SSE already works on semantic structures or knowledge representation formalisms, including Semantic Network (SN) and Knowledge Graph (KG) (e.g. triplet, RDF Graph, and OWL Graph). Therefore, no need exists to generate a KG with an LLM at all.
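    The point that an SSE works directly on semantic structures can be made concrete with a minimal in-memory triple store queried by pattern matching, with no LLM in the loop. The facts are taken from the Novorossiya example in the quoted document; the store itself is a hypothetical sketch.

```python
# A Semantic Network / Knowledge Graph is just a set of triples;
# pattern matching over them is the core of triplet/RDF-style querying.
triples = {
    ("Novorossiya", "type", "ConfederationProject"),
    ("Novorossiya", "locatedIn", "Ukraine"),
    ("Kyiv", "capitalOf", "Ukraine"),
}

def match(s=None, p=None, o=None):
    # None acts as a wildcard, like a variable in a SPARQL triple pattern.
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

    A query such as `match(s="Novorossiya")` returns every assertion about the entity, each one with exact provenance.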

    We also note that the first example is wrong. Proper utilization of Vector Space Model (VSM) and Latent Semantic Indexing (LSI) would have resulted in a sufficiently high amount of semantically similar text content.
    We also note that the second example is wrong. As we said before, SSE already works on an SN, KG, etc.

    Maybe the authors have already heard about the Semantic (World Wide) Web (SWWW), and other standards and technologies for the definition, exchange, and syndication of Metadata, Linked Data (LD), Topic Maps, Ontologies, and Ontologics, and so on? See also the webpage Introduction of the website of OntoLinux.

    It is also related to the ontology-based or enhanced Topic-based Vector Space Model (eTVSM), specifically the synonym eTVSM, which "can represent semantic term relations to model common linguistic phenomena by representing these relations in a domain ontology. Such semantic term relations are then mapped by the eTVSM onto the vector angles in the operational vector space. Furthermore, the eTVSM proposes a formal procedure for deriving term angles from an eTVSM ontology model which now becomes a central place for semantic term relations modeling". "[...] a possible principle of automatic domain ontology construction [...] exploits the synonymy semantic relations given by the WordNet."
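    The quoted mapping of semantic term relations onto vector angles can be sketched with a toy domain ontology. All terms and topics below are hypothetical illustration, not the eTVSM's formal derivation procedure: terms sharing a topic dimension end up with a small angle in the operational vector space.

```python
import math

ontology = {                     # term -> topics (toy domain ontology)
    "car":        {"vehicle"},
    "automobile": {"vehicle"},   # synonym of "car" via shared topic
    "bank":       {"finance", "river"},
    "money":      {"finance"},
}
topics = sorted({t for ts in ontology.values() for t in ts})

def term_vector(term):
    # Build the term vector over topic dimensions.
    return [1.0 if t in ontology[term] else 0.0 for t in topics]

def angle(u, v):
    # Vector angle in degrees; synonyms yield 0, unrelated terms 90.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))
```

    The ambiguous term "bank" sits at 45 degrees to "money", reflecting that only one of its two topics is shared.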
    The LLM is used as a source of semantics to provide relations for the KG (e.g. synonyms), which is used as Topic Map (TM) or domain ontology in particular and for Information Retrieval (IR) and Information Filtering (IF) (e.g. RecS) in general. Doing this without the provision of an explicit domain ontology or with the utilization of a KG also shows that it is our ontologic approach.
    It took them at least 18 years, since we publicized the webpage Introduction (last sentence) of the website of OntoLinux at the end of October 2006, and again 10 years, since we publicized more direct information in the Ontonics, OntoLab, Ontologics, OntoLix and OntoLinux Further steps of the 20th of February 2014, to find for example the document titled "Modelle zur Repräsentation natürlichsprachlicher Dokumente. Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken==Models for the Representation of Natural Language Documents. Ontology-based Information Filtering and Retrieval with relational Databases", publicized in 2003 and as a book in 2004.
    Too bad that we have all approaches in this specific relation to our RAG, such as for example "graph machine learning", as can also be seen with the section Softbionics and Artificial Intelligence 3 of the website of OntoLinux and the Ontonics, OntoLab, Ontologics, OntoLix and OntoLinux Further steps of the 20th of February 2014, where we said that "[w]e are also looking at some other packages in the fields of Machine Learning (ML), Data Mining (DM), and NLP, as well as basic graph processing and analytics".

    The approach of "mapping the question across each group to create community answers, and reducing these into a final global answer" is just (Semantic (World Wide) Web (SWWW)) reasoning by Swarm Intelligence (SI), Common Sense Computing (CSC), and both, which were also created with our OS, and also reminds us of architectures of bioholonic Cognitive Agent System (CAS), Holonic Agent System (HAS), and Multi-Agent System (MAS) agency or Agent Society (AS) included in our Evoos.

    At this point, we also reveal or recall the following:
    If one looks at the document titled "Natural Language Processing with Subsymbolic Neural Networks" (1997) once again, specifically its chapters

  • 2.1 Properties of Subsymbolic Representations
    Subsymbolic (i.e. distributed) representations have properties that are very different from the symbolic representations: (1) The representations are continuously valued; (2) Similar concepts have similar representations; (3) The representations are holographic, that is, any part of the representation can be used to reconstruct the whole; and (4) Several different pieces of knowledge are superimposed on the same finite hardware.", and
  • 3.1 Symbols vs. Soft Constraints
    [...] people are not pure symbol processors when they understand language. Instead, all constraints - grammatical, semantic, discourse, pragmatic - are simultaneously taken into account to form the most likely interpretation of the sentence. This behavior is difficult to model with symbolic systems, but it is exactly what neural networks are good at.",

    then one can already see why the brute force approach must fail. The simultaneous holographic representation of all constraints is also the reason for the deficits of this brute force approach presented with the FGREP method, mechanism, and architecture and the DISPAR and DISCERN systems based on FGREP.

    We also recall that "[...] when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query." The problem with false positives is also known as the system immanent hallucination problem.

    From the point of view of informatics and bionics it shows that solving more and more of the deficits of the brute force approach only leads more and more to logic-based AI, specifically for reasoning and inferencing.
    From the point of view of Algorithmic Information Theory (AIT) it makes no sense to build, train, customize, fine-tune, improve, operate, manage, keep under control, and maintain a simulating and hallucinating illusionary LLM, because doing so only increases complexity, and doing the same with a coherent OM is less time and space consuming and has no loss of quality and quantity. If one already has a semantic structure and reasoning on it for NLU, then one only needs syntactic sugar around it for NLP, basically for speech recognition and speech synthesis, which does not require an LLM, but only a grammar, as also shown with bioholonics (see also the Clarification of the 14th of April 2024 and the note x-o-x-o-x-o-x-o-x of the 27th of June 2024 (keyword common base)).
    A (dedicated or specialized) LLM can be used for syntactic sugar and variations of a response, if at all.
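    The claim that a grammar suffices for the NLP surface once semantics are handled elsewhere can be illustrated with a tiny context-free grammar recognizer. The grammar and vocabulary below are a hypothetical mini-example, not a full speech front end.

```python
# A toy context-free grammar, as nested production rules.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["system"], ["graph"]],
    "V":  [["indexes"], ["answers"]],
}

def derive(symbols, tokens):
    # True if the symbol sequence can derive exactly the token sequence.
    if not symbols:
        return not tokens
    head, rest = symbols[0], symbols[1:]
    if head in GRAMMAR:
        # Nonterminal: try every expansion.
        return any(derive(expansion + rest, tokens)
                   for expansion in GRAMMAR[head])
    # Terminal: must match the next token.
    return bool(tokens) and tokens[0] == head and derive(rest, tokens[1:])

def accepts(sentence):
    return derive(["S"], sentence.split())
```

    The recognizer accepts "the system indexes the graph" and rejects ungrammatical orderings deterministically, with no trained model involved.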

    The subsymbolic, connectionist, probabilistic, and statistic brute force approach (e.g. Machine Learning (ML), including Artificial Neural Network (ANN), including Deep Learning (DL or DeepL)) does not work and has failed again and again with announcement and warning.

    Eventually, LLM is a dead-born child. See once again the Clarification of the 29th of May 2024 and the Clarification of the 2nd of August 2024 (keyword Turing complete).
    We also already said that they are running in circles and the path of the circle has closed.

    But one can also already see the utter nonsense of LLM.
    "A Neural Probabilistic Language Model" (2000): "Experiments indicate that learning jointly the representation (word features) and the model makes a big difference in performance." But only by compressing the LM in a way that is only half precise and requires extensive work and integrated systems to cure the deficits. In total we get a worse performance in addition to uncertainty.
    "A Neural Probabilistic Language Model" (2003): "learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations"
    And then they use an LLM to get a vector space index or KG of the semantic similarity between words and sentences with all the system immanent deficits of CN or ANN. And they already have distributed word representations or embeddings.
    Why not simply take a graph, a grammar, a Semantic Network (SN), an ontology, a Topic Map (TM), a KG, etc., and do advanced graph processing, VSM, LSI, Fuzzy Logic, Rewriting Logic (RL), etc. across words, sentences, and documents?
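    "Simply take a graph ... and do advanced graph processing": as a minimal sketch with hypothetical concept edges, a breadth-first search over a small Semantic Network answers relatedness queries deterministically, and the returned path itself is the provenance.

```python
from collections import deque

# Toy undirected Semantic Network, stored as adjacency lists.
edges = {
    "LLM": ["LanguageModel"],
    "LanguageModel": ["Model", "LLM"],
    "Model": ["LanguageModel", "KnowledgeGraph"],
    "KnowledgeGraph": ["Model", "Ontology"],
    "Ontology": ["KnowledgeGraph"],
}

def path(start, goal):
    # BFS returns a shortest chain of concepts linking start and goal,
    # or None if the concepts are unrelated in the network.
    queue, seen = deque([[start]]), {start}
    while queue:
        p = queue.popleft()
        if p[-1] == goal:
            return p
        for nxt in edges.get(p[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(p + [nxt])
    return None
```

    The same structure supports VSM, LSI, Fuzzy Logic, or Rewriting Logic layers on top, without retraining anything.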

    We also note that utilizing a simulating, illusionary, and hallucinating LLM for generating a KG is not really convincing, because this puts the fox in charge of the henhouse and is a recipe for chaos. Is it not?
    Fighting fire with fire is an approach, that often works. But fighting fire by putting oil into it only multiplies the chaos.
    Guess why we start with ontology, logics, and Ontologics, and also a Zero Ontology or Null Ontology.

    We could continue with

  • hypergraph, which "is a generalization of a graph in which an edge can join any number of vertices" and "Formally, a directed hypergraph is a pair (X, E) where X is a set of elements called nodes, vertices, points, or elements and E is a set of pairs of subsets of X", while "An undirected hypergraph is also called a set system or a family of sets drawn from the universal set",
  • coherent Ontologic Model (OM), including Information Retrieval Model (IRM), including
    • IRM with set-theoretic mathematical basis,
      • without term-interdependencies (e.g. Standard Boolean Model), and
      • with transcendent term-interdependencies (e.g. Fuzzy Set Model (FSM), "Fuzzy Logic (FL) = Computing with Words" (1996), which is beyond Computational Linguistics (CL)),

      in addition to

    • IRM with algebraic mathematical basis,
      • without term-interdependencies (e.g. Vector Space Model (VSM)),
      • with immanent term-interdependencies (e.g. Generalized Vector Space Model (GVSM), Latent Semantic Index (LSI), etc.), and
      • with transcendent term-interdependencies (e.g. Topic-based Vector Space Model (TVSM), ontology-based or enhanced TVSM (eTVSM), Artificial Neural Network (ANN) trained by backpropagation method (e.g. Feedforward Artificial Neural Network (FANN or FNN)), etc.)

      and

    • IRM with probabilistic mathematical basis,
      • without term-interdependencies (e.g. Inference Network Model (INM), including Belief Network Model (BNM), and Language Model (LM)),
      • with immanent term-interdependencies (e.g. Language Model (LM)), and
      • with transcendent term-interdependencies (e.g. (retrieval by) logical imaging),
  • "Fuzzy Logic, Neural Networks, and Soft Computing" (1994), Soft Computing (SC), partnering for synergies with
    • Fuzzy Logic (FL), later whole Multi-Valued Logics (MVL) (by Evoos),
    • neural network theory or neural computing respectively Artificial Neural Network (ANN), later Reflective Neural Network (RNN) or Feedback Neural Network (FNN) (by Evoos), and
    • Probabilistic Reasoning (PR), later whole Probabilistic Model (PM or ProM), with PR subsuming
      • Bayesian Network or Belief Network (BN or BNet),
      • Genetic Algorithm (GA), later whole Evolutionary Computing (EC) (by Evoos),
      • chaotic system, and
      • parts of learning theory,
  • Philosophy
    • Ontology,
    • Epistemology, also called knowledge theory, examining nature, origin, and limits of knowledge, delineating the boundary between justified belief and opinion,
    • etc.
  • integration of subsymbolic and symbolic systems
    • "Modular Integration of Connectionist and Symbolic Processing in Knowledge-Based Systems" (1994) and
    • "Neural Fuzzy System" (1996),
  • Swarm Computing (SC) or Swarm Intelligence (SI),
  • Computational Intelligence (CI) (SC with SI),
  • Common Sense Computing (CSC),
  • Web Ontology Language (OWL) for main sentence with subordinate clause,
  • multidimensional Ontologic Model (OM) with FL values, LSI vector values, OWL, etc. for all kinds of our Retrieval Augmented Generation (RAG) with OM (e.g. IRM, Foundational Model (FM) (e.g. Foundation Model (FM)), MLM, ANNM (e.g. ANNLM (e.g. LLM, MMANNM)), Cognitive Model (CM or CogM), etc.)
  • multidimensional, scalable Ontologic data storage Base (OntoBase), which means HyperGraph DataBase (HGDB) and Vector DataBase (VDB) always included,
  • etc., etc., etc..
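    The directed hypergraph definition quoted in the list above, a pair (X, E) where E is a set of pairs of subsets of X, translates directly into code. The node and edge data are a hypothetical minimal example.

```python
# X: the set of nodes; E: directed hyperedges as (tail set, head set).
X = {"a", "b", "c", "d"}
E = [
    (frozenset({"a", "b"}), frozenset({"c"})),  # edge from {a, b} to {c}
    (frozenset({"c"}), frozenset({"d"})),       # edge from {c} to {d}
]

def successors(nodes):
    # All nodes reachable in one hyperedge step: an edge fires only
    # when its whole tail set is contained in the given node set.
    out = set()
    for tail, head in E:
        if tail <= nodes:
            out |= head
    return out
```

    Unlike an ordinary graph edge, the first hyperedge joins two tail nodes to one head node at once, which is exactly the generalization the quoted definition describes.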

    All is formal, logic, validated and verified, rational, and so on, or simply said ontologic.
    The rest is merely a matter of sufficient hardware (e.g. computing power, memory, storage, and bandwidth) and of course energy.

    Guess why we call it Ontologics.

    At this point, one can see that everything is related somehow. Required is a superordinate architecture, a Cognitive System architecture (e.g. bioholonics), a Multi-Agent System (MAS) architecture (e.g. Agent Society (AS), Holonic Agent System (HAS)), and an operating system architecture such as our Evoos Architecture (EosA) and our OS Architecture (OSA).

    "Probabilistic Latent Semantic Indexing" (1999)

    Also very interesting, one can nicely see that our Retrieval Augmented Generation (RAG) with OM is

  • related to Semantic Search Engine (SSE),
  • similar to Dialog System (DS or DiaS) and Conversational System (CS or ConS), specifically Dialogue Manager (DM or DiaM),
  • ideally based on ontology, Semantic Net (SN), and KG,
  • based on semantic similarity search and related to ontology-based Vector Space Model (VSM), and
  • based on our OntoBot, OntoBase (e.g. tuple store, graph store, and K-V store recently also called vector store or vector database), OntoFS, OntoSearch and OntoFind, and other (multidimensional, multilingual, multiparadigmatic, multimodal, multimedia) OSComponents (OSC), and their integrations by our (multidimensional, multilingual, multiparadigmatic, multimodal, multimedia) OS Architecture (OSA) all in one by what we described as liquid property (e.g. Blackboard-Based System (BBS), atoms, molecules, APIs, CHemical Abstract Machine (CHAM), Tuple Space System (TSS), etc.), which means power set, including all possible variants of combinations, integrations, unifications, and fusions of the basic functions with APIs of referenced technologies, goods, and services. Which of the billions, trillions, or more of them make sense in practice is a different problem, solved with other parts (e.g. AI, Common Sense Computing (CSC), etc.).
    We are not so stupid and have not left anything as a legal loophole. This is the 21st century.
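    The semantic cache noted above among the store components can be sketched as follows. This is a hypothetical toy: responses are cached under a query embedding and reused when a new query is similar enough; the bag-of-words embedding stands in for a real model.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []            # list of (embedding, response)

    def get(self, query):
        # Serve a cached response when a stored query is similar enough.
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(e[0], q), default=None)
        if best and cosine(best[0], q) >= self.threshold:
            return best[1]
        return None

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("what is the refund policy", "Returns within 30 days.")
```

    A near-duplicate query is served from the cache without a second model call, which is exactly the spend-and-latency argument for semantic caching.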

    Indeed, it is interesting to take the embedding in vector space for the semantic similarity search. But the latter already proves that the embedding represents the semantic similarity, coherence, and the relationship of entities respectively words, terms, phrases, etc. of a language and also the contexts and the ontologies beside the language models respectively the ontological frames, and the Topic Maps (TMs).
    The same holds, if using an ANNLM to build a KG. But the latter already proves that the ANNLM learned the semantic similarity, coherence, and relationship of entities respectively words, terms, phrases, etc. of languages and also the contexts and the ontologies beside the language models respectively the ontological frames, and the Topic Maps (TMs). Large Language Model "acquire knowledge about syntax, semantics, and ontologies [...] inherent in human language corpora, [including term semantic relations,] but they also inherit inaccuracies and biases present in the data they are trained on."
    At this point, we are also at Topic-based Vector Space Model (TVSM), and ontology-based or enhanced TVSM (eTVSM). But like the Arrow System had no integration with ANN, at least TVSM and definitely eTVSM also had no integration with ANN, {correct? utilization of ontology} but also no integration with KG. This leads the discussion back to the fields of chatbot, virtual assistant with voice user interface (voice assistant), Dialog System (DS or DiaS), and Conversational System (CS or ConS).
    Eventually, it is merely another variant of the same original and unique expressions of idea included in our Evoos with its Evoos Architecture (EosA) and our OS with its OS Architecture (OSA), and OntoBot, OntoBase, OntoSearch and OntoFind, OntoBlender, and other OS Components (OSC).

    We also note

  • RAG of DiaM is important part of most LLM-based tools, as also shown in the Clarification of the 2nd of August 2024,
  • complexity,
  • trust,
  • verify,
  • evidence,
  • provenance,
  • coherent,
  • ground itself in the graph.

    That is really a bad attitude of the authors.
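    The retrieve-then-generate loop of what is called RAG, as noted above, can be sketched minimally; the keyword-overlap retriever and the generate() stub are illustrative assumptions, not any particular system's API.

```python
# Minimal retrieve-then-generate loop. The retriever is a toy keyword
# overlap scorer; generate() is a stub where a real system would call a
# language model with the query plus the retrieved context.

def retrieve(query, corpus, k=2):
    q = set(query.lower().split())
    # Score each document by how many query words it shares.
    scored = sorted(corpus, key=lambda doc: len(q & set(doc.lower().split())), reverse=True)
    return scored[:k]

def generate(query, context):
    # Stub: echo the query grounded in the retrieved passages.
    return f"Answer to '{query}' grounded in: {'; '.join(context)}"

corpus = [
    "A knowledge graph stores entities and relations.",
    "Vector embeddings support semantic similarity search.",
    "Cooking pasta requires boiling water.",
]
query = "how does a knowledge graph store facts"
answer = generate(query, retrieve(query, corpus))
```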

    For sure, one could argue to take the Semantic (World Wide) Web (SWWW) standards and technologies, and a MultiModal Dialogue System (MMDS), specifically an ontology-based MMDS. But

  • "Semantic Web Goes Mainstream" and "SmartGuide: An Intelligent Information System basing on Semantic Web Standards" only came in 2002,
  • "The SmartKom Mobile Multimodal Dialogue System" only came in 2002 to 2004,
  • "SmartWeb [...]" only came in 2004 to 2007, and
  • "Ontology-based infrastructure for intelligent applications" Multimodal Dialogue System only came in 2003,

    and are already based on, or are even partial plagiarisms and fakes of, our Evoos, which integrates

  • model-reflection,
  • Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG),
  • Information System (IS), including Knowledge Management System (KMS),
  • Expert System (ES),

    which expresses contexts, ontologies, and language models, and also,

  • logics,
  • bionics (e.g. AI, ML, CI, ANN, ABS, MAS, HAS, CogAS, etc.),
  • and much more
  • Distributed operating system (Dos),
  • Virtual Machine (VM),
  • coherent Ontologic Model (OM) with ANNModel, ANNLanguage Model, MultiModalANNModel, etc.,
  • MultiModal User Interface (MMUI) with Speech Processing (SP), and
  • Cognitive System (CogS) responsible for the whole Evoos in general and the Conversational System (ConS) and MMDS in particular,

    and integrated in our OS (see also the webpages Overview and Links to Software of the website of OntoLinux).

    It is not only about the integration of subsymbolic and symbolic systems, but also about the integration of GBKB or KG and Agent-Based System (ABS).

    In case of our integrating OS Architecture (OSA), the details of an implementation are irrelevant, because proper referencing of the original and unique expression of idea created by C.S. is always required due to the exclusive moral rights respectively Lanham (Trademark) rights of C.S..
    But when a publication

  • provides sufficient evidences to show a causal link with the original and unique expression of idea created by C.S. and
  • has taken said expression of idea as source of inspiration and blueprint,

    then also proper allowance and licensing of the performance and reproduction of said expression of idea is always required due to the copyright of C.S. and our corporation.

    The legal situation is crystal clear in case of our next Game Changer™ wrongly and illegally called RAG taken alone and in combination and in integration with the other parts of our OS, and therefore should be obvious even for a person, who is not a Person of Ordinary Skill In The ART (POSITA), also known as laywoman or layman.
    Eventually, it does not need any detailed explanations anymore to recognize that it is our Evoos and our OS.
    Any entity, that cannot understand or does not want to understand the true legal situation, has a serious problem.

    The status as Multi-Game Changer™, New RenAIssance, New AI, etc. proves that our expressions of ideas constitute sui generis works of art and our claims are completely justifiable and enforceable.

    By the way:

  • Also note that the dirty tricks (e.g. abuse of market power, blackmailing and provoking very expensive and very long lasting legal procedures at the courts, etc.) do not work, because we are talking about serious criminal activities and conspiracies, and we will get preliminary injunctions, which last until the end of said long lasting legal procedures at the courts by the prosecutors.
  • Also note that what is wrongly called Cloud-native technologies (Cnx) and Conversational Artificial Intelligence (CAI or ConsAI), and also their integrations, applications, and services already belong to our Evoos and our OS, and are provided with the exclusive infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies with their set of foundational and essential facilities, technologies, goods, and services, specifically SoftBionics as a Service (SBaaS).
    And if this does not work as we do want it to, then we will show that providing SBaaS and other parts of these essential facilities interferes with, and also obstructs, undermines, and harms the exclusive moral rights respectively Lanham (Trademark) rights of C.S.. As a result there will be nothing else at all.


    12.August.2024

    00:00, 10:31, 13:20, and 16:07 UTC+2
    Illegal FOSS will be removed

    Free and Open Source Software (FOSS)

    Alphabet (Google) Google Cloud Foundational Open Source Projects, specifically

  • gRPC: RPC framework
    Ontologic System (OS), Distributed operating system (Dos),
  • gVisor: Secure container runtime
    operating System Virtual Machine (osVM), operating system-level Virtualization (osV) or containerization, OntoL4Linux,
  • Istio: Connect and secure services
    Distributed operating system (Dos),
  • Knative: Serverless framework for Kubernetes
    Distributed operating system (Dos),
  • Kubeflow: ML toolkit for Kubernetes
    Ontologic System (OS), Ontologic roBot (OntoBot),
  • Kubernetes: Management of containerized applications
    In-Kernel Berkeley Databases (KBDB) (in-memory and file) key-data store, Key-Value (KV) store, Hash Table, etc., Distributed operating system (Dos), Cluster Computing (ClusterC), BlackBoard System (BBS), Multi-Agent System (MAS), Associative Memory (AM),
  • OpenCensus: Cloud native observability [and analysis(!?)] framework
    Autonomic Computing (AC), monitoring, tracing, logging, continuous optimization,

    and other illegal Free and Open Source Software (FOSS).

    Microsoft GitHub projects, specifically

  • Distributed application runtime (Dapr) system (16th of October 2019),
    "Dapr is a portable, serverless, event-driven runtime that makes it easy for developers to build resilient, stateless and stateful microservices that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
    Dapr codifies the best practices for building microservice applications into open, independent, building blocks that enable you to build portable applications with the language and framework of your choice."
  • Semantic Kernel (25th of April 2023),
    "Semantic Kernel is a lightweight, open-source development kit that lets you easily build AI agents and integrate the latest AI models into your C#, Python, or Java codebase. It serves as an efficient middleware that enables rapid delivery of enterprise-grade solutions. [...]
    Semantic Kernel provides you with the infrastructure to build any kind of agent you need without being an expert in AI."
    "Semantic Kernel is an SDK that integrates Large Language Models (LLMs) [...] with conventional programming languages [...].
    What makes Semantic Kernel special [...] is its ability to automatically orchestrate plugins with AI. With Semantic Kernel planners, [a user] ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user."
  • GraphRAG (2nd of July 2024)
    2 illegal FOSS projects were released after the publication of the Clarification of the 29th of May 2024 and all of our other related publications,
  • etc..
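    The plan-then-execute pattern quoted above, where a model first proposes a sequence of plugin calls and a runtime then executes them, can be sketched generically; the plugin names and the hard-coded plan are hypothetical illustrations and do not reproduce any actual SDK.

```python
# Generic plan-then-execute sketch: a planner proposes an ordered list of
# plugin names for a goal, and the runtime applies them in sequence.

PLUGINS = {
    # Hypothetical plugins; real ones would wrap APIs or model calls.
    "summarize": lambda text: text.split(".")[0] + ".",
    "uppercase": lambda text: text.upper(),
}

def plan(goal):
    # A real planner would ask a language model to derive these steps
    # from the goal; here the plan is hard-coded for illustration.
    return ["summarize", "uppercase"]

def execute(goal, text):
    for step in plan(goal):
        text = PLUGINS[step](text)
    return text

result = execute("shout the gist", "First sentence. Second sentence.")
# → 'FIRST SENTENCE.'
```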

    And what was that with SearchGPT? Microsoft had a seat on the board and knew what they were doing.

    We also demanded once again to stop providing

  • computing power for illegal activities, and
  • Large Language Models (LLMs) as illegal FOSS and for illegal FOSS projects.

    And if certain entities refuse to do so, then remove their Ontologic Applications and Ontologic Services (OAOS) from the

  • infrastructure of the Ontoverse (Ov), including
    • facilities (e.g. buildings, data centers, exchange points or hubs, and transmission lines)
    • technologies (e.g. models, environments, systems (e.g. backbones, core networks, or fabrics, and satellite constellations), platforms, frameworks, components, and functions, and also Service-Oriented technologies (SOx) (e.g. as a Service technologies (aaSx))),
    • goods (e.g. contents, data, software (e.g. applications), hardware (e.g. processors), devices, robots, and vehicles), and
    • services (e.g. as a Service (aaS) business, capability, and operational models),

    and the

  • access places and access devices (e.g. Ontoscope (Os) variants).

    Usually, they do not have any scruples about walking over corpses. So go on, but from now on in the right direction.

    They do not change their attitudes and their strategies, and therefore they do not change their goals. They are still giving our original and unique works of art away as FOSS to protect and support their core businesses; instead of removing illegal FOSS, they are adding even more; and instead of supporting us in protecting the rights and properties of C.S. and our corporation, they are supporting other entities to infringe them.
    That total nonsense based on plagiarism and freeloading has even become the norm of the competition and hence a race to the bottom.
    That is absolutely intolerable, incomprehensible, and inwhatsoeverble.

    We do not barter our rights and properties for (the majority shares of) their companies, because we do get them as damage compensations or in a truly legal competition anyway, just as we do get all illegal materials back under our control.
    See also the notes

  • Without complete damages no essential facilities of the 13th of June 2024 and
  • SOPR will not give up on controls and damages of the 19th of June 2024.

    For some weeks now, our Society for Ontological Performance and Reproduction (SOPR) has already been thinking once again about additional measures to increase the motivation and the happiness.

    15:17 UTC+2
    Franz Inc. blacklisted

    no triple store with integrated subsymbolic and symbolic system, specifically unified subsymbolic and symbolic system, also known as connectionist symbol processing system, or subsymbolic Artificial Intelligence (AI) system, ... or integrated connectionist model (e.g. neuro-symbolic AI) and definitely not as as a Service (aaS)

    15:18 UTC+2
    Zilliz blacklisted

    no what is wrongly and illegally called RAG and definitely not as as a Service (aaS)

    15:37 UTC+2
    Aerospike blacklisted

    no NoSQL store with integrated subsymbolic and symbolic system, specifically unified subsymbolic and symbolic system, also known as connectionist symbol processing system, or subsymbolic Artificial Intelligence (AI) system, ... or integrated connectionist model (e.g. neuro-symbolic AI) and definitely not as as a Service (aaS)

    15:49 UTC+2
    Ontotext blacklisted

    no Graph DataBase (GDB) with integrated subsymbolic and symbolic system, specifically unified subsymbolic and symbolic system, also known as connectionist symbol processing system, or subsymbolic Artificial Intelligence (AI) system, ... or integrated connectionist model (e.g. neuro-symbolic AI) and definitely not as as a Service (aaS)

    By the way:

  • One always meets twice in life. In this case it is already the third time.
    And the same holds for all the other ontotrolls from the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), University of Karlsruhe, University of Innsbruck, and so on. They had 25 years to contact us, talk with us, and ask us for allowance and license.
    We are also observing the company DeepL, and also the governments of the member states of the European Union (EU), and the European Commission (EC).
  • And once again: Social market economy is not socialism. Democracy is not tyranny of the court state.

    15:49 UTC+2
    Onlim blacklisted

    no what is wrongly and illegally called Conversational Artificial Intelligence (CAI or ConAI) and RAG, and no KG with integrated subsymbolic and symbolic system, specifically unified subsymbolic and symbolic system, also known as connectionist symbol processing system, or subsymbolic Artificial Intelligence (AI) system, ... or integrated connectionist model (e.g. neuro-symbolic AI) and definitely not as as a Service (aaS)

    15:55 UTC+2
    Adesso has to comply with the ToS

    Terms of Service (ToS)

    15:59 UTC+2
    Accenture has to comply with the ToS

    Terms of Service (ToS)

    16:02 UTC+2
    Capgemini has to comply with the ToS

    Terms of Service (ToS)

    16:03 UTC+2
    msg systems has to comply with the ToS

    Terms of Service (ToS)

    16:04 UTC+2
    Tata Consultancy Services has to comply with the ToS

    Terms of Service (ToS)


    14.August.2024

    15:05, 16:07, and 18:22 UTC+2
    Illegal GC, CC, CEFC, CNx, OM, xGPTx, LLM, CAI, RAG, etc. will be removed

    Grid Computing (GC or GridC),
    Cluster Computing (CC or ClusterC)
    Cloud, Edge, and Fog Computing (CEFC)
    Cloud-native technologies (Cnx)
    Ontologic Model (OM)
    Technologies based on Generative Pre-trained Transformer technology (xGPTx)
    Large Language Model (LLM)
    Conversational Artificial Intelligence (CAI or ConsAI)
    Retrieval Augmented Generation (RAG)
    et cetera (etc.)

    By the way:

  • We recommend to not cite scientific papers, which are related to these fields and were publicized after the year 2007, and to be very careful with scientific documents, which were publicized between the years 2000 and 2006. Virtually everything, which was publicized after the year 2005, has to be taken very carefully, and virtually everything, which was publicized after the year 2014, is even more or less plagiarism and rubbish, and does not provide any legal security. In fact, that specific scientific research in the fields of computer sciences, informatics, engineering, and so on has the level of truth and fact of the free and lying press and fast food marketing.
    Countries, which are funding scientific research with the goal to mislead the public about the true origin of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S., will face considerable countermeasures by our Society for Ontological Performance and Reproduction (SOPR), such as for example 100 to 300% on the royalties and exclusion from foundational and essential facilities, technologies, goods, and services.

    19:07 UTC+2
    Breakup of Alphabet (Google) highly unlikely

    A breakup of the company Alphabet (Google) is highly unlikely, because all legal issues can be cured by relatively simple actions, such as

  • signing a cease and desist declaration for paying for exclusive service placement, and
  • providing a choice screen with 4 search engines.

    This was also sufficient to avoid a breakup of the company Microsoft in the case of its web browser in particular, and therefore any other measures would completely overreach the powers of the government and the federal authorities.

    It is just a wet pipedream of certain entities.

    In the case of ChromeOS and Android the situation is even different than incompetent entities are able to understand. For example,
    Android is an operating system, which is based on the Linux kernel and distributed as FOSS, and
    Android with Google Services is a partial Ontologic System (OS) variant, which belongs to us and will be substituted with our exclusive and mandatory implementation (see also the note SOPR considering temporary OntoBot choice screen of the 7th of August 2024).
    ChromeOS is even not so important and will also be substituted with said OS implementation.

    In short, debate over.

    22:51 UTC+2
    Gemini for all models?! Always for free?!

    We quote and translate a report, which is about Alphabet (Google) Pixel Ontoscope (Os) variants: "With the purchase of the Pro models, one gets a subscription to the [partial Ontologic roBot (OB or OntoBot) variant] Gemini Live for one year."

    Comment
    Gemini Advanced and Gemini Live are part of the One AI Premium Plan.
    For some months, we have been observing how companies are monetizing our OntoBot, despite the fact that we have the exclusive right to do so and have not given any allowance and license.
    The industry consensus is 20 U.S. Dollar each month for the advanced or professional utilization. #-?

    Another subscription plan is structured as follows:

  • Free: 0 U.S. Dollar; 3 questions per month
  • Basic: 5 U.S. Dollar; 50 questions per month
  • Pro: 20 U.S. Dollar; unlimited questions per month; file attachments

    for utilizing 3 Ontologic Models (OMs) simultaneously, and choosing and returning one of the 3 results.
    Why has the company access to the OMs at all?
    Why is the company competing against the other Ontologic Applications and Ontologic Services Providers (OAOSPs) at all?
    Has it for example 1 user account at each of the 3 OAOSPs (60 U.S. Dollar cost plus some U.S. Dollar cloud and as a Service costs) for 9 own users (180 U.S. Dollar revenue) or even a more elegant management?
    Very oversmart (not really).
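    The reseller arithmetic questioned above, assuming the quoted 20 U.S. Dollar industry rate, works out as follows:

```python
# 3 provider accounts resold to 9 own Pro subscribers, at the assumed
# industry rate of 20 U.S. Dollar per account and month.
PROVIDER_FEE = 20  # U.S. Dollar per provider account per month
PRO_PRICE = 20     # U.S. Dollar per own Pro subscriber per month

accounts = 3
subscribers = 9

cost = accounts * PROVIDER_FEE     # 60 U.S. Dollar, plus cloud/aaS costs
revenue = subscribers * PRO_PRICE  # 180 U.S. Dollar
margin = revenue - cost            # 120 U.S. Dollar before other costs
```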

    Honestly, we had not intended to ask end users for a fee.
    But from the general commercial point of view we find the subscription plans interesting.
    But we are not sure if such plans harmonize with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR).
    Eventually, the final way of exploitation will also apply to Alexa, Copilot, and Siri, and their users, because after all:

  • There is only one OS and Ov of the 19th of September 2023,
  • OS and Os are originals of the 24th of January 2024,
  • There is only one OS and Ov #2 of the 1st of April 2024, and
  • OSC standard and mandatory with OsC of the 26th of July 2024.

    See also the note SOPR considering temporary OntoBot choice screen of the 7th of August 2024.

    23:43 UTC+2
    Choosy Chat blacklisted

    Illegal performances and reproductions of our original and unique, copyrighted Ontologic roBot (OntoBot) in whole or in part are prohibited in general.

    Also note that our OntoBot is part of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies.


    16.August.2024

    00:00, 14:14, and 22:10 UTC+2
    % + %, % OAOS, % HW #16

    The criminal infringements of the rights and properties of C.S. and our corporation and the various conspiracies have already been discussed in the past and need no more elaboration.

    In the moment, when they have to make compromises and strive for the new constructive joint steps, they take half-hearted actions and fall back on the old destructive dirty tricks and fraudulent methods instead, particularly by abusing their market power and exploiting their (illegal) monopolies to blackmail and ultimately dictate and enforce their terms and conditions, which is populistically marketed as the establishment of maximal freedom, openness, and fairness respectively a level playing field, and democratization. Eventually, they are only stealing and giving away the personal rights and private properties of C.S. and our corporation to protect their monopolies and walled gardens and also their cliques.
    Obviously, all that only makes sense, if the establishment of a Joint Venture (JV) is rejected, and only shows that our

  • explanations of the legal situation and the consequences, warnings, and threats do not seem to be having the desired effect and are not bearing fruit and
  • suggestions for the solution and our whole endeavour cannot succeed

    with such an irrational and ridiculous way of thinking.
    Indeed, that is what we already expected.

    And despite the fact that it usually does not help with such persons, we can only repeat once again that we are preparing the set of legal documents, which can also be filed with the federal authorities and courts, and include applications for interim injunctions.
    In this relation, we noticed that U.S.American companies only invest in data centers and start-ups, and also support their old business partners and partners in crime in Europe. But the latter is called conspiracy, corruption, and so on.

    For sure, we will react on that and make the overall procedure more formal and expensive. Specifically, we are considering to

  • file lawsuits and
  • apply for criminal prosecutions

    before we begin to negotiate, or being more precise, determine the height of damage compensations, the details of agreements and contracts, and whatsoever, and not the other way round.

    In this conjunction, 3 specific applications for criminal prosecution are under special consideration, which can be carried out in parallel to the matter subject to private law and are part of the various organized criminal acts, including conspiracies, which are completely documented:

  • illegal Free and Open Source Software (FOSS) projects,
  • blackmailing, and
  • corruption.

    Waging war is not permitted in a state under the rule of law, specifically not in order to define and improve one's legal and commercial position. But we are very sure to improve our already excellent legal position by legal actions even more.

    If Microsoft is out, then the initial fast track damage compensation option on the basis that at least Microsoft and Alphabet (Google) agree is void.
    Indeed, Amazon was added as third entity, but substituting Microsoft with Amazon is virtually the same situation.

    Furthermore, the option to sue and prosecute, trigger insolvencies, take over the bankrupt's assets, and then go public with our companies is becoming more interesting once again, specifically when we talk about a volume higher than 10 trillion U.S. Dollar.

    Alphabet (Google)
    +1% for lack of provision for future or foreseeable events or accrued liabilities, and alternative extraordinarily high investment.
    +1% for latest stunt in relation to bionics.
    +1% for designation of illegal FOSS, specifically in relation to what is wrongly and illegally called Cloud-native, in misleading way regarding our foundational OS.
    63% if we count correctly

    Microsoft
    +1% for nonsense like Atlassian, support of fraudulent third parties, and so on.
    +1% for no inclusion of OpenAI, instead of the separation stunt, well knowing the fraudulent business goal, strategy, plan, trick, activity, and so on, and conspiring with OpenAI; OpenAI SearchGPT did not come suddenly, but when Microsoft was a member of its management board; and lack of provision for future or foreseeable events or accrued liabilities, and alternative extraordinarily high investment.
    +1% for latest stunt in relation to FOSS and our semantic kernel and Retrieval Augmented Generation (RAG).
    +1% for etiquette.
    The latest criminal action and provocation is only stupid, mildly said.
    68% if we count correctly

    >Amazon
  • +1% for LLM start-ups; Amazon and J. Bezos invested in Anthropic and {hosts or invests or both?} Mistral and Perplexity as well, and Perplexity also uses the illegal LLM of Mistral.
    +1% for Perplexity.AI, Amazon Web Services hosts the crawler of Perplexity.
    63% if we count correctly

    Apple
    We demand complete damage compensations for iPod, iPhone, iPad, Apple Watch, etc., and MacOS, iOS, WatchOS, VisionOS, Siri, etc..

    Meta (Facebook) and all other companies concerned
    We demand complete damage compensations.

    Our SOPR is also considering to increase the royalties, which are regulated by the (unofficial) Terms of Service (ToS) with its License Model (LM) of our SOPR.

    The fees for a Joint Venture (JV) partner should be:
    18% to 20% for OAOS
    18% to 20% for BCO GC, CC, SC, DC HW
    11% to 13% for GC, CC, SC, DC HW
    8% to 10% for other HW

    The fees for another licensing partner should be:
    28% to 30% for OAOS
    28% to 30% for BCO GC, CC, SC, DC HW
    11% to 13% for GC, CC, SC, DC HW
    8% to 10% for other HW

    The utilization of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies remains for all business partners, though it should be more restrictive to better protect the exclusive moral rights respectively Lanham (Trademark) rights of C.S. and our corporation.

    Ontologic Applications and Ontologic Services (OAOS)
    Bionic, Cybernetic, and Ontologic Computing (BCOC) and Networking
    Grid Computing (GC or GridC) and Networking
    Cluster Computing (CC or ClusterC) and Networking
    Cloud Computing (CC or CloudC) and Networking
    SuperComputing (SC or SupC) and Networking
    Data Center (DC)
    HardWare (HW)

    Companies, that thought to be very clever to protect their core businesses or to establish their new businesses by infringing the rights and properties of C.S. and our corporation, including those companies, which are already blacklisted, have the option to transfer 99% of their shares to our corporation as damage compensations.

    See also the notes

  • % + %, % OAOS, % HW #4 of the 1st of April 2024,
  • Without complete damages no essential facilities of the 13th of June 2024,
  • SOPR will not give up on controls and damages of the 19th of June 2024, and
  • Geopolitical regions should avoid legal interfaces of the 22nd of June 2024.

    By the way:

  • The theft of chicken wings worth 1.5 million U.S. Dollar resulted in 9 years in prison. But in our case we are talking about more than 10 trillion U.S. Dollar in damages. Oh, ... what? :)
  • The governments of the U.S.America, the member states of the European Union (EU), the Republic of India, the P.R.China, and other entities have to remove all plagiarisms and fakes, including scientific documents, which are reproducing materials related to our Evoos and our OS, and have been publicized, implemented, and operated in the last years, specifically to mislead the public about the true origin of our original and unique works of art and to interfere with, and also obstruct, undermine, and harm the exclusive moral rights respectively Lanham (Trademark) rights of C.S..
    And do not produce new plagiarisms, fakes, and other rubbish.
  • We highly recommend that the Chief so and so Officers (CxOs) have a serious talk with their managers and other employees, specifically those working in the Research and Development (R&D) department.
  • What a kindergarten.


    17.August.2024

    10:00 UTC+2
    All out-of-court solutions rejected by bad actors

    The entities of the other side have made clear again and again that they will not cooperate constructively in any way. On the contrary, the last actions were either too little to be worth mentioning, or destructive and dismissive of any solutions, as usual.
    But we have observed it time and again since the end of last year, with their tactical maneuvering and their doggedness, and also their refusal to inform shareholders and investors that they will continue and try to pervert everything regarding the creation and control of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S..

    Of course, it has been clear to us for a long time that they would insist on legal texts suitable for consultation. But everybody, who has ever sent a legal letter to an industrial company, knows very well that no such legal texts exist and every legal letter will be answered with a standard text containing blah blah blah and denying any wrongdoing.
    In addition, nobody wants to take responsibility.
    But once again, we do not have any reason to lower our guard and be taken for a ride. In fact, we are the last one, who have to do anything in this case. And they do know exactly

  • our legal position and legitimate claims on the one hand and
  • their legal duty to ask for the allowance and licensing, which even works without any legal text and legal assistance since decades and worldwide, because this is the law and hence already the fair compromise, and their illegal actions on the other hand.

    Obviously, they do already have the legal texts suitable for consultation. They even misuse our related publications for increasing the damages. But yet there is no positive movement to be seen. Instead, in the moment we say that we have this and that as well, they claim the creation of it for themselves, implement it, and give it away for free.
    But both, no completely worked out legal texts and no legal wrangling, are conditions for any out-of-court agreement, including the fast track damage compensation option.

    Ultimately, it comes down to the fact that they will continue to act in the same way on the legal level, as we have already seen before on the artistical, technical, societal, and political levels, which means nothing else than exploitation of size and provocation of very resource-intensive legal disputes, including legal dodging and perversion of the true legal situation and the course of justice at the courts.

    We are neither incapable of acting nor have we reached the limits of our possibilities, not even by a long shot. In fact, we have not carried out any sabotage, laid any fire, or fired any shot, and we are not living in the 15th or 18th century, in total contrast to the other side.
    See also the notes

  • Illegal FOSS will be removed of the 12th of August 2024,
  • Illegal GC, CC, CEFC, CNx, OM, xGPTx, LLM, CAI, RAG, etc. will be removed of the 14th of August 2024, and
  • % + %, % OAOS, % HW #16 of the 16th of August 2024 (yesterday) (most potentially the last note of the series)

    to get an impression about our next steps.

    19:32 UTC+2
    Comment of the Day

    "True democracy has no reason of state or national interest.", [C.S., Today]

    Reason of state or national interest only means that no democratic discourse exists, or said in other words, it is just another term for tyranny of state.

    The political doctrines of the

  • late Middle Ages, Renaissance, and 17th century (e.g. Niccolò Machiavelli, Giovanni Botero, and Cardinal Richelieu, and also Jean de Silhon), which were made before the constitutional doctrines of the European rechtsstaat==state of law or legal state (e.g. Immanuel Kant) and the closely related Anglo-American rule of law of the following centuries, and
  • early 20th century (e.g. Friedrich Meinecke)

    never had a place in democracy, because "[i]n the liberal and natural law tradition of thought, the idea of reason of state stands in opposition to the idea of law and the rule of law; reason of state and the rule of law are hostile political concepts.", [Helmut Rumpf: Die Staatsräson im Demokratischen Rechtsstaat==The Reason of State in the Democratic Constitutional State or State Under the Rule of Law. 1980.]
    Exactly and true democrats do act accordingly.
    "In der liberalen und naturrechtlichen Denktradition steht die Idee der Staatsräson im Gegensatz zur Idee des Rechts und des Rechtsstaats, sind Staatsräson und Rechtsstaat feindliche politische Leitbegriffe.", [Helmut Rumpf: Die Staatsräson im Demokratischen Rechtsstaat. 1980.]
    Ganz genau und echte Demokraten handeln dementsprechend.

    Raison d'État is not État légal and does not mean "L'État, c'est moi".

    "In the Federal Republic of Germany, the principle of the rule of law is one of several constitutional principles of the Basic Law. In contrast to the principles of democracy, republic, or welfare state (cf. in so far article 20 of the Basic Law), however, the idea of the rule of law was not directly determined in the Basic Law, but is rather subject to "linguistic openness."[2]”
    Correspondingly, politicians talk about a value-based order, but not a rule-based order.
    Eventually, it is just only an authoritarian state based on the blah blah blah of anti-social, incompetent, and arrogant [net zero cursing] self-promoters.
    "In der Bundesrepublik Deutschland ist das Rechtsstaatsprinzip eines von mehreren Verfassungsprinzipien des Grundgesetzes. Im Gegensatz zum Demokratie-, Republik- oder Sozialstaatsprinzip (vgl. insoweit Art. 20 GG) fand der Gedanke der Rechtsstaatlichkeit im Grundgesetz allerdings keinen unmittelbar determinierten Niederschlag, unterliegt vielmehr einer "sprachlichen Offenheit".[2]"
    Dementsprechend sprechen Politiker von einer wertebasierten Ordnung, aber nicht von einer regelbasierten Ordnung.
    Letztendlich ist es also doch nur ein Obrigkeitsstaat, der auf dem Blahblahblah von assozialen, inkompetenten und arroganten [Netto-Null-Fluchen] Selbstdarstellern basiert.

    19:52 UTC+2
    SOPR cancels out-of-court agreements

    Our Society for Ontological Performance and Reproduction (SOPR) concluded that the existential threat posed by shareholders filing lawsuits can only be resolved through

  • demanding the payment of damage compensations,
  • transfer of all illegal materials,
  • orderly insolvency,
  • takeover of the bankrupt's assets as damage compensation,
  • restructuring concept, including an initial debt relief and the extension of loans to reduce the current liabilities, followed by the reduction of the share capital or capital stock to 0 U.S. Dollar, Euro, etc. with the result that the current shareholders will leave the company without compensation and the stock corporation will lose its stock market listing,
  • restart under our rightful control and command, and
  • going public as a subsidiary of our corporation.

    And all other indicators also show that an insolvency is unavoidable.

    They also refuse to pay our damage compensations, to reconstitute our rights, and to give back our properties and only want to pay lower royalties, if at all, or otherwise they would have signed already.

    Only totally incompetent actors are still participating in illegal Free and Open Source Software (FOSS) projects and insisting on legal texts suitable for consultation.


    21.August.2024

    12:24 UTC+2
    Times Magazine already blacklisted

    12:24 and 25:02 UTC+2
    Condé Nast has to comply with ToS

    Terms of Service (ToS)

    The Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR) demand the

  • payment of royalties as set by the License Model (LM),
  • utilization of the exclusive infrastructures of our SOPR and our other Societies with their set of foundational and essential
    • facilities,
    • technologies,
    • goods, and
    • services, including the
      • Marketplace for Everything (MoF) for raw signals and data, knowledge bases, belief bases, models, and algorithms,
      • etc.
  • unrestricted access to raw signals and data generated in the legal scope of ... the Ontoverse (Ov),
  • and so on.

    A media company that refuses to comply with the

  • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
  • rights and properties of C.S. and our corporation, and
  • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR),

    will

  • enjoy only the very basic essential facilities, if and only if the exclusive provision of these essential facilities does not interfere with, and also obstruct, undermine, and harm the exclusive moral rights respectively Lanham (Trademark) rights of C.S., which is the case in relation to our transformative, generative, and creative Bionics for content creation (see also the referenced publications of us below),
  • be forced by the courts to comply, or
  • be blacklisted by our SOPR.

    And once again: No, we do not steal the original and unique works of other creators.

    And our

  • coherent Ontologic Model (OM) (e.g. Foundational Model (FM), AIModel, MLModel, ANNModel (e.g. Foundation Model (FM), ANNLanguageModel (e.g. LargeLM)), Cognitive Model (CM), etc.),
  • transformative, generative, and creative Bionics, also wrongly and illegally called generative Artificial Intelligence (AI),
  • Ontologic roBot (OntoBot), and
  • Ontologic Search (OntoSearch) and Ontologic Find (OntoFind),

    which have been created as parts of our Evolutionary operating system (Evoos) and our Ontologic System (OS), are not built using "stolen goods" or trained on "stolen goods", because C.S. is entitled by the copyright law to transform said works of art respectively copyright protected goods of others when creating a new, original and unique, personal expression of idea, which is definitely and doubtlessly the case with our Evoos and our OS.

    See also the note The big AI bluff is busted, too of today below.

    By the way:

  • Condé Nast already lost against Lynn Goldsmith. In its very own interest, the media company should avoid any legal discourse at the courts with us.

    12:24 UTC+2
    Associated Press has to comply with ToS

    Terms of Service (ToS)

    See the note Condé Nast has to comply with ToS of today.

    12:40 UTC+2
    Financial Times has to comply with ToS

    Terms of Service (ToS)

    See the note Associated Press has to comply with ToS of today.

    13:11 UTC+2
    The big AI bluff has been busted, too

    Over the last years, we got the impression from time to time that the activities around the fields of deep learning, virtual assistant, chatbot, and so on look like the marketing actions and magic shows in the fields of autonomous vehicles, flying cars, and nuclear fusion.
    And indeed, the longer we investigate the activities in the fields of cybernetics, bionics, robotics, and ontonics, the more evidence we find that others are only selling old wine in new skins (e.g. word embedding (e.g. word2vec), Dialog System (DS or DiaS), conversational agent System (CAS or ConAS)) and our new wine in old skins (e.g. xGPT, Retrieval Augmented Generation (RAG)) to mislead the public about the true origin of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. and other entities, and to get control over them.

    Now, let us wait until one of our supersmart kleptomaniacs presents a cognitive agent or what they wrongly call a chatbot.

    See also the Feature-List #1 of the 22nd of April 2008:

  • Integration of functions from the web browsers and email, news, and chat clients.

    See also the following:

  • World wide first-of-its-kind operating system, that sets and establishes the new standards Web 3.0, Web 4.0, and Web 5.0 (every user gets an id, that starts with onto#)
  • Personal World Wide Web-based infrastructure, network, and virtual drive, that supports the user centric migration 'Max-Mig', synchronization 'Max-Sync' of applications and data, and communication 'Max-Com'

    The first list-point has to be viewed in relation to our Evoos based on bionics (e.g. AI, ML, CI, ANN, etc.) and the OS Architecture (OSA), which integrates all in one.
    The second list-point is a slightly encrypted description of Cloud Computing of the second generation (CC 2.0).

    See also the section Collaborative Virtual Environment of the Links to Software webpage of the website of OntoLinux:

  • Yag2002 project: VRC - Virtual Reality Chat.

    See also the related messages, notes, explanations, clarifications, investigations, and claims, specifically

  • Clarification of the 27th of April 2016,
  • Clarification of the 29th of April 2016,
  • Comment of the 5th of May 2016,
  • OS is ON of the 9th of May 2016 (so, so, Azure Sphere),
  • Ontologic Web Further steps of the 9th of December 2016, and
  • Clarification of the 15th of April 2017, and also
  • Clarification of the 3rd of March 2024,
  • Clarification of the 14th of April 2024,
  • Clarification Cloud 3.0 'R' Us #4 of the 8th of May 2024,
  • Clarification of the 29th of May 2024,
  • Clarification of the 2nd of August 2024, and
  • Clarification of the 11th of August 2024,
  • Comment of the Day of the 11th of August 2024,

    and the other publications cited therein.
    Honestly, we are wondering why we have explained the details of the fraud in the last years at all, because the whole case is so obvious.
    It should be obvious once again even for a person, who is not a Person of Ordinary Skill in the Art (POSITA), also known as laywoman or layman, that what is wrongly and illegally called Foundation Models (FM), Large Language Models (LLMs), Generative Artificial Intelligence (GenAI), Conversational Artificial Intelligence (CAI), and so on, and implemented respectively performed and reproduced by entities other than C.S. and our corporation constitutes copyright infringements.


    22.August.2024

    14:58 UTC+2
    Further steps

    While finishing the

  • Clarification of the 14th of April 2024
    Simulacrum, Bioholonics, Society of Mind,
  • Clarification Cloud 3.0 'R' Us #4 of the 8th of May 2024
    Distributed operating system (Dos), Linux, BlackBoard System (BBS), Event-Driven Architecture (EDA), Space-Based Architecture (SBA), container engines, runtimes, Cloud-native technologies (Cnx), etc.,
  • Clarification of the 29th of May 2024
    integrated Bionic systems, integrated subsymbolic and symbolic systems,
  • Clarification of the 2nd of August 2024
    Dialog System (DS or DiaS), Dialogue Manager (DM or DiaM), Conversational System (CS or ConS), and
  • Clarification of the 11th of August 2024
    word embedding, Information Retrieval (IR) System (IRS), Dialogue Manager (DM or DiaM), Retrieval Augmented Generation (RAG), and

    we are also cleaning up this website of OntomaX, such as for example the

  • description of damage compensations, and facilities, technologies, goods, and services,
  • definition of Ontoverse (Ov) and New Reality (NR), and as a Service technologies (aaSx), and
  • properties of our original and unique sui generis masterpiece series, including the works of art also titled Evolutionary operating system, also abbreviated as Evoos, and Ontologic System, also abbreviated as OS.

    Simultaneously, we are completing, detailing, refining, and clarifying our collection of contents and draft for the set of legal documents.

    15:13 and 17:27 UTC+2
    Google is monopoly, share ratio like Microsoft

    The U.S.American Department of Justice (DoJ) together with other entities has determined that the subsidiary Google of the company Alphabet has established a monopoly by paying for the exclusive placement of its online search engine and conducting other fraudulent actions.
    But these prohibited activities also show that Google's monopoly is directly connected with, and supported and protected by the illegal performance and reproduction of our original and unique Ontologic System (OS) (including our Evolutionary operating system (Evoos)), which was implemented as the partial OS variants Android and iOS, and other foundational and essential parts of our OS (e.g. integration of a virtual assistant with Voice User Interface (VUI), also called voice assistant by other entities, Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), and an Artificial Neural Network (ANN) Language Model (LM) (ANNLM) (e.g. Large Language Model (LLM))).

    While the company Microsoft approached our OS at first from the fields of Distributed operating system (Dos) and Service-Oriented technologies (SOx) and then also from the fields of chatbot, Dialog System (DS or DiaS), Cognitive Agent System (CAS), and Information Retrieval (IR) System (IRS) (e.g. Search Engine (SE)), Alphabet (Google) approached our OS from all fields.
    Eventually, Alphabet (Google) is in the same situation as the company Microsoft and the cumulative ratio of the shares of a proposed joint venture (percentage of company shares plus percentage of revenue of the joint venture partner) would be 85% = 68% + 17%, 90%, or even a larger total percentage value in case of an out-of-court or court-ordered agreement.
    Otherwise respectively without any kind of agreement, we do ask for royalties of 27% of the revenue generated with Ontologic Applications and Ontologic Services (OAOS) and demand the minimal utilization of our essential facilities without interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) rights of C.S., which in practice means no Distributed System (DS), no Cloud Computing, no Serverless Computing, no Function as a Service (FaaS), no Cloud-native, no Ontologic Model (OM), no Foundational Model, no Artificial Neural Network (ANN) Language Model (ANNLM), no multimodal transformative, generative, and creative Bionics, no Ontologic Programming, no OntoBot, no OntoScope, no OntoSearch, no OntoFind, and no OntoThis and OntoThat. :)
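    The cumulative joint-venture share ratio described above can be sketched as a small calculation. This is a minimal illustration, assuming the rule stated in the text (percentage of company shares plus percentage of revenue of the joint venture partner); the function name and variable names are our own:

```python
def cumulative_share_ratio(company_share_pct: float, revenue_pct: float) -> float:
    """Return the cumulative joint-venture ratio in percent:
    the percentage of company shares held plus the percentage
    of the joint venture partner's revenue."""
    return company_share_pct + revenue_pct

# The 85% = 68% + 17% example given in the text:
print(cumulative_share_ratio(68.0, 17.0))  # -> 85.0
```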

    We would also like to recall that the damage compensations have to include an element to restore or compensate for the destroyed momenta of at least the last 20 years, which is not negotiable.


    26.August.2024

    22:52 UTC+2
    Waabi blacklisted

    The obvious infringements of the exclusive moral rights respectively Lanham (Trademark) rights, and the copyrights of C.S. and our corporation, and the conspiracies with the companies Nvidia and Uber by the company Waabi should already be sufficient to break necks.

    Entities have to

  • reference the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S.,
  • ask for the allowance and licensing for the performance and reproduction of certain parts of said AWs and IPs, and also the utilization of essential facilities provided exclusively by our Society for Ontological Performance and Reproduction (SOPR),
  • avoid the interference with, and also obstruction, undermining, and harm of the exclusive exploitation rights (e.g. commercialization rights (e.g. monetization rights)), including the
    • licensing of said AWs and IPs as well as
    • selling shares of our businesses

    to third entities.

    All those fraudulent and even serious criminal start-ups are insolvent right from the start, yet start their business well knowing all these legal facts.
    That is no longer the incorporation of an enterprise, but the creation of an insolvency as business goal and strategy.

    By the way:

  • For sure, we will add this activity to the already very long list of those companies, which have partnered with Waabi.
  • In the legal scope of ... the Ontoverse (Ov) the Nvidia CUDA (originally Compute Unified Device Architecture) parallel computing platform, specifically its utilization for the field of Artificial Neural Network (ANN), Drive OS, and Drive SDK, and also comparable partial plagiarisms of our Ontologic System (OS) belong to a considerable extent to the exclusive and mandatory infrastructures of our SOPR and our other Societies with their set of fundamental and essential facilities, technologies, goods (e.g. robot, vehicle), and services.


    27.August.2024

    00:00 UTC+2
    Inability to act and stalemate support all claims

    We always anticipated before the publication of our Ontologic System (OS) at the end of October 2006 that the very well known companies would steal our original and unique works of art created by C.S. and exclusively exploited by our corporation, and we even found out that they had been doing so since the mid-1990s. Guess why we had a Dot.com bubble.

    On the 25th of September 2023 we already concluded a note as follows:
    "It has also become obvious, that companies [...] want to establish a level playing field and level off that playing field. But we already discussed the matter and every interested entity does already know that we do not need to compete for the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. at all, but are allowed to exploit them exclusively worldwide.
    For sure, we do know that companies [...] want to provoke and embroil us in very expensive, time-consuming, bogus lawsuits at the courts and drawn-out disputes. But we already explained that a legal process will not go through all instances at all, because all relevant leading decisions or landmark rulings have been made. If at all it will go to one instance higher and the next higher instance will directly reject any appeal.
    Do not be fooled by those manoeuvres, which should only confuse the public. Eventually, companies [...] will have to accept and respect the moral rights, the copyrights, and all other rights and properties of C.S. and our corporation. No other way exists in a rule-based law and order environment. They have to sign, pay, comply. :)"

    Therefore, we began to discuss general legal strategies with our legal team in the year 2013.
    Eventually, we chose a two-pronged double-edged legal strategy, consisting of actions related to the

  • private law (e.g. moral rights respectively trademark rights, copyrights, etc.) and
  • criminal law (e.g. conspiracy, wire fraud, etc.).

    One cornerstone of our private legal strategy is a preliminary injunction by court order, specifically to accelerate the processes of raising public awareness and reaching an out-of-court or court-ordered agreement.
    Luckily, the constitutional complaint of the company Amazon in relation to such a preliminary injunction by a lower court has been rejected by the highest court in the F.R.Germany in the year 2022.

    Another cornerstone of our federal legal strategy is a judicial determination of illegal acts, which serve to enforce our claims for damages under private law.

    Another part of our strategy under consideration is to claim very low damages at first, but later use a court judgment as leverage to claim the true, very high damages.
    Or said in other words, we only need a judicial determination of an illegal act, which is sufficient to enforce the exclusive rights of C.S. and our corporation, which again means the prohibition of the performance and reproduction of any part of a protected creation.
    Howsoever, such details need careful consideration.

    Nevertheless, this still leaves a lot of room for the concrete determination of legal actions and the related details.

    So far, two observations can already be made:

  • The inability to continue with
    • damaging and destroying the reputation, identity, and integrity of C.S. and our corporation,
    • hiding the huge amount of evidences, specifically the undeniable similarities and matches between the original and unique works of art created by C.S. and the illegal performances and reproductions or plagiarisms fabricated by bad actors,
    • misleading the public about the true origin of our original and unique works of art,
    • getting the complete control respectively the personal and private exclusive rights and properties of C.S. and our corporation, and
    • preventing legal action, prosecution, and punishment,

    and the resulting stalemate already support all of our claims, because this deadlock can only be resolved by the fulfilment of these claims as part of the only agreement, which can and will be reached with or without courts.

  • The very cautious (re)actions already made by the U.S.American companies in Europe for a year now, even though they
    • do not completely understand what the first observation means for them and
    • cannot improve their legal situation in the U.S.America or in other countries,

    because we are talking about the

    • national and international laws, regulations, and acts, as well as agreements, conventions, and charters, specifically the copyright law,
    • rights and properties of C.S. and our corporation, and
    • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR).

    See also the notes

  • % + %, % OAOS, % HW #4 of the 1st of April 2024,
  • Without complete damages no essential facilities of the 13th of June 2024,
  • SOPR will not give up on controls and damages of the 19th of June 2024,
  • Geopolitical regions should avoid legal interfaces of the 22nd of June 2024,
  • All out-of-court solutions rejected by bad actors of the 17th of August 2024,
  • SOPR cancels out-of-court agreements of the 17th of August 2024, and
  • Google is monopoly, share ratio like Microsoft of the 22nd of August 2024.

    00:36 UTC+2
    FYI blacklisted

    That thing is now in the legal scope of ... the Ontoverse (Ov).

    By the way:

  • One always meets twice in life. In this case it is already the third time.
  • That Ontologic Applications and Ontologic Services (OAOS) are free to use by end users does not mean that our Society for Ontological Performance and Reproduction (SOPR) does not demand and enforce the payment of royalties in accordance with the Terms of Service (ToS) and License Model (LM) from Ontologic Applications and Ontologic Services Providers (OAOSPs) in relation to said OAOS.

    18:20 UTC+2
    Comment of the Day

    "In 2000, crypto meant security. In 2015, crypto meant anarchy.", [C.S., Today]

    Crypto always means security, but for a decade now, crypto has also meant anarchy. And anarchy gives neither security nor democracy.


    29.August.2024

    01:38 and 08:32 UTC+2
    Super Micro in deep trouble, Nvidia will follow

    Super Micro has manipulated its company balance sheet once again.
    We also note that such frauds also marked the beginning of the end of the Dot.com bubble.

    We quote a webpage about the stocks of the company Nvidia: "NVDA Stock Quote Price and Forecast
    NVIDIA Corp. engages in the design and manufacture of computer graphics processors, chipsets, and related multimedia software. It operates through the following segments: Graphics Processing Unit (GPU) and Compute & Networking. The Graphics segment includes [...], the GeForce NOW game streaming service and related infrastructure, [...], virtual GPU, or vGPU, software for cloud-based visual and virtual computing, automotive platforms for infotainment systems, and Omniverse Enterprise software for building and operating metaverse and 3D internet applications. The Compute & Networking segment consists of Data Center accelerated computing platforms and end-to-end networking platforms [...], NVIDIA DRIVE automated-driving platform and automotive development agreements, Jetson robotics and other embedded platforms, NVIDIA AI Enterprise and other software, and DGX Cloud software and services. [...]"

    Comment
    At least the parts of the 2 segments in the quote above related to what is wrongly and illegally called metaverse and cloud are based on our original and unique Ontologic System (OS) created by C.S.. This also comprises most if not all of its technologies, goods, and services for data centers.
    In this way, Nvidia infringes the rights and properties (e.g. copyright) of C.S. and our corporation.

    In addition, most if not all parts of the 2 segments in the quote above did not belong to the core business of Nvidia before the publication of our Ontologic System (OS) at the end of October 2006, but are more or less identical with the related parts of the segments of our corporation, which exclusively exploits our OS.
    Furthermore, Nvidia is misleading the interested public about the true origin of the original and unique works of art created by C.S.
    In this way, Nvidia interferes with, and also obstructs, undermines, and harms at least 2 of the exclusive moral rights respectively Lanham (Trademark) rights of C.S..

    Moreover, Nvidia refuses to properly reference our copyright protected sui generis works of art ... and supports third entities in also infringing the rights and properties of C.S. and our corporation.
    In this way, Nvidia abuses its market power, organizes and participates in conspiracies, and conducts wire fraud, investment fraud, and several other serious criminal activities.

    The same also holds for other companies, such as for example Salesforce, SAP, HP, and Co.

    See also the notes

  • SOPR considering no openess again of the 24th of February 2024 (keyword Nvidia) and
  • % + %, % OAOS, % HW #11 of the 24th of May 2024.

    03:34 and 22:41 UTC+2
    WhatsoeverGPT bluff and hype is over

    It was always only about copyright infringement and conspiracy. And if our damage compensations will not already break their necks, then the prosecution will do so finally.

    And we also do know what the big ones have invested and announced as investments, and what the small ones have invested and have available as investments, which on the one hand is a fraction of that, and on the other hand raises the question why the big ones needed the small ones at all.
    And both, the big ones and the small ones, do need our businesses, and the big ones will protect their core businesses against the small ones, which also raises the question why the big ones needed the small ones at all.
    Therefore, even OpenAI and the other last small ones still standing are over, because they were always a lost cause from the legal and technological points of view, and are now being outgunned by the big ones.
    We do know and have explained the related reasons and activities for more than a decade on this website of OntomaX: They are merely proxies of the industry or the government to infringe the rights, steal the properties, damage the reputations, harm the integrities, and frustrate the momenta of C.S. and our corporation.
    We also documented it already in the cases of the parts of our Ontologic System (OS), which are wrongly called

  • Cloud, Edge, and Fog Computing (CEFC) and Cloud-native, and
  • smartphone, smarttablet, smartwatch, smartcar, smarthome, etc..

    Eventually, we do have the goals, strategies, tricks, and activities, which show a simple-to-comprehend pattern of fraud and even serious crime, and a lot of evidence.

    Correspondingly, we can only highly recommend not to invest in those small ones anymore, specifically to the big ones that depend on the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S., on our goodwill, and on how well it works out with the payment of our damage compensations, royalties, and other legal matters, because in the case of the big ones this would be understood as abuse of market power and blackmailing among many other fraudulent and even serious criminal activities.
    Furthermore, it would show once again that on the one hand they refuse to contact and collaborate with us and on the other hand we were absolutely right with having the opinion that no agreement can be reached without courts and hence with preparing lawsuits in parallel to proposing out-of-court agreements.

    We also recall once again that they do not need any illegal alternatives at all, because they already get access to the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies with their set of foundational and essential facilities, technologies, goods, and services as regulated by the

  • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
  • rights and properties of C.S. and our corporation, and
  • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR)

    to soften dependencies and remove restrictions to a certain extent for providing and guaranteeing

  • freedom of choice, which is truly free,
  • innovation, which is truly inventive, and
  • competition respectively already existing core businesses

    pro bono publico==for the benefit of the public without interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) rights (e.g. exclusive rights for exploitation (e.g. commercialization (e.g. monetization))) of C.S..

    As also regulated by the same legal provisions, specifically the exclusive moral rights respectively Lanham (Trademark) rights and also the copyrights, including the exclusive rights for

  • naming and labelling,
    • referencing respectively citation with attribution, and
    • designation,
  • presentation,
  • modification, and
  • exploitation (e.g. commercialization (e.g. monetization)),

    and also

  • performance, and
  • reproduction,

    of C.S. and our corporation, proper naming and labelling is required as well.

    See also the notes

  • Other labels for our works of art prohibited of the 8th of June 2024, and
  • SOPR added clause for end of no labelling of the 11th of June 2024.

    See also the related messages, notes, explanations, clarifications, investigations, and claims, specifically

  • The big AI bluff has been busted, too of the 21st of August 2024,
  • Further steps of the 22nd of August 2024,
  • Inability to act and stalemate support all claims of the 27th of August 2024,

    and the other publications cited therein.


    30.August.2024

    16:40 and 18:16 UTC+2
    EU AI Act and Cal AI Act make LLMs illegal, too

    European Union (EU)
    California (Cal)
    Artificial Intelligence (AI)

    None of those illegal so-called Large Language Models (LLMs) respectively partial plagiarisms and fakes of our original and unique, copyright protected

  • Evolutionary operating system (Evoos) with its foundation of the so-called stochastic parrot and one-trick pony, and
  • Ontologic System (OS) with its Ontologic roBot (OntoBot)

    can be tested before publication. And one can already observe that they also have no means of any kind of "emergency stop button".
    Both mandatory legal requirements are just not included by design in those illegal LLMs and the technologies, applications, and services based on them, in addition to the deficits inherent in their foundations, such as for example the so-called hallucination.

    Eventually, only our Ontologic System (OS) with its

  • pure rationality,
  • Friendly Artificial Intelligence (Friendly AI),
  • basic properties of (mostly) being verified and validated, and
  • many other original and unique expressions of idea

    integrated by our Ontologic System Architecture (OSA) also remains as the only artistical, technological, and legal creation and solution from this specific legal point of view of these AI Acts besides the moral right respectively Lanham (Trademark) right, the copyright, and other rights of C.S. and our corporation.

    For sure, we also do love that these AI Acts regulate all illegal LLMs, including those illegal Free and Open Source Software (FOSS), which will end that mess also from this other direction.

    To Keep It Simple and Stupid and also Crystal Clear (KISSCC), any illegal LLM is proven illegal another time, if it passes mandatory testing as required by such an AI Act, because a Bionic approach (e.g. Machine Learning (ML), Artificial Neural Network (ANN), etc.) being successfully tested respectively validated and verified is one of the original and unique expressions of idea created by C.S., and any performance and reproduction, including any marketing and implementation, interferes with, and also obstructs, undermines, and harms the exclusive moral rights respectively Lanham (Trademark) rights (e.g. exploitation (e.g. commercialization (e.g. monetization))) of C.S..
    Neither our masterpieces nor our momenta are for free for anybody.
    Our absolutely Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions will be fulfilled, and this begins with the payment of damage compensations, which are the higher of the apportioned compensation, profit, and value, or two or more of them, retroactively for 20 years respectively from the due date of the 1st of January 2007: the higher of royalties unpaid, profits generated, and business values increased, retroactively for 20 years.
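    The "higher of" damage-compensation rule above can be sketched as a small calculation. This is a minimal illustration under the assumption stated in the text (the compensation is the highest of the three retroactive components); the function name, variable names, and all figures are purely hypothetical:

```python
def damage_compensation(royalties_unpaid: float,
                        profits_generated: float,
                        business_value_increase: float) -> float:
    """Return the highest of the three retroactive damage
    components over the 20-year period described in the text."""
    return max(royalties_unpaid, profits_generated, business_value_increase)

# Example with invented placeholder figures (arbitrary currency unit):
print(damage_compensation(1_000.0, 2_500.0, 1_800.0))  # -> 2500.0
```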

    We already wish all fraudulent and even serious criminal entities to have a lot of fun when trying to remove all that illegal FOSS. Our SOPR will not hesitate to act legally. :)

    By the way:

  • "I have been campaigning for the regulation of AI for over 20 years. Just as we regulate any technology that poses a potential risk to the public.", [Elon Musk]
    But he also publicized an illegal chatbot based on an illegal Large Language Model (LLM), or, being more precise, an illegal plagiarism of an essential part of our Evolutionary operating system (Evoos) with its coherent Ontologic Model (OM) and our Ontologic System (OS) with its Ontologic roBot (OntoBot).
    And the Advanced Driver-Assistance System (ADAS) or partial vehicle automation, self-driving with semi-autonomous navigation (classified as SAE International Level 2 automation), and fully autonomous driving (classified as SAE Level 5) of the vehicles of Tesla Motors are also based on the same foundation.
    Therefore, we have demanded for over a decade to regulate any person that poses a potential risk to the public. Is it not so?
    © or ® or both
    Christian Stroetmann GmbH
    Disclaimer