News 2024 May

01.May.2024

11:14 and 15:33 UTC+2
SOPR considering action due to ELVIS Act

The House of Representatives of the U.S.American state Tennessee passed the Ensuring Likeness Voice and Image Security (ELVIS) Act, which "is an amendment to a 1984 law that was the result of the Elvis Presley estate litigation for controlling how his likeness could be used after death".

We would not call it a new act, but merely a clarification and more detailed formulation of legal matters. In fact, the ELVIS Act protects, and now read very carefully, merely against the utilization of a recording of the voice of a public figure, including an artist, with a generative software, or being more precise, with our Ontologic roBot (OntoBot) and our transformative, generative, and creative Bionics.
We also note that the lawmakers clearly said that the ELVIS Act does not apply to a cover band and therefore imply that it also does not apply to the act of sampling, the production of sound (e.g. tone, voice, etc.), and other acts protected by the constitution or the basic right, and the copyright law.

For sure, such an act changes the rule-based law and order environment with which our Society for Ontological Performance and Reproduction (SOPR) has to comply, but actually we are not sure if the exclusive rights and properties of C.S. and our corporation are violated in this way, because C.S. does have

  • very broad individual rights and artistic freedoms, including mimicking of human voices,
  • complete protections of the own creative integrity, and
  • exclusive moral rights (e.g. exploitation (e.g. commercialization (e.g. monetization))).

    Furthermore, the ELVIS Act addresses only original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S., which is not tolerable and lawmakers have always been very clearly warned about this legal situation.

    The latter also suggests that our SOPR might have to

  • classify the recording industry as a member of the Information and Communication Technology (ICT) licensee class and ask for a relative share of the revenue of 27% according to the License Model (LM) of our SOPR,
  • prohibit the recording industry from utilizing our transformative, generative, and creative Bionics for production, on the basis of the exclusive rights for their performance and reproduction,
  • increase the royalties for the state Tennessee,
  • demand the repeal of that amendment of the ELVIS Act or the whole Elvis nonsense altogether in Tennessee due to its unlawful and unconstitutional restriction of the rights and properties of C.S. and our corporation,
  • demand licensees of our SOPR in written form to refrain from any legal action against C.S. and our corporation on the basis of the ELVIS Act and other comparable unlawful and unconstitutional acts, or
  • act in other legal, technical, and commercial ways.

    We also quote a report about the ELVIS Act and a related discussion in the U.K.: "[...]
    "In the past year, I have developed my own deepfake version of myself that is not only trained in my personality but also can use my exact tone of voice to speak many languages," [a female artist] said.
    [...]"

    Comment
    We have also seen such deepfake avatar, rudimentary digital twin, and very simple cybernetic self-reflection, self-image, or self-portrait creations realized by other persons.
    But none of these creators got the authorization from our SOPR with the consent and on behalf of C.S. for the

  • performance and reproduction of our integrated multidimensional, multidomain, multilingual, multiparadigmatic, multimodal, multimedia transformative, generative, and creative OntoBot as part of their deepfake avatars and
  • utilization of illegal platforms for the realization of their creations.

    Is not it?
    And using such a deepfake avatar in the legal scope of ... the Ontoverse (Ov) is considered a type of Ontologic Applications and Ontologic Services (OAOS).

    And as we crystal clearly explained before (see the note of the ), a tone of voice is not copyrightable, because it would give that female artist a monopoly on it. She has to take recordings of her voice at least.

    And with radical attitude, vigilante justice, capricious action, and cheap lobbyism and politics none of them wins anything. :)
    No, it is not working that way.

    See the related notes publicized in the last months on this website of OntomaX, specifically

  • New York Times completely wrong regarding copyright of the 27th of December 2023,
  • Do not confuse creative and commercial aspects of the 7th of April 2024,
  • It is like 2Pac's voice, but Drake's cadence of the 27th of April 2024,

    and the other publications cited therein.

    Oh, what a pity. Our proposed deal or compromise would have been much better and less expensive.

    It is always better to collaborate with us. :)

    By the way

  • We are wondering once again why there is suddenly such an activity instead of doing more for data protection or privacy, and data security, protection against the lying press and social media threads, and so on.


    03.May.2024

    08:21 and 10:01 UTC+2
    Users use many sources of information

    For sure, Google is different to Amazon, Ebay, Facebook, TikTok, and so on. But the main point is that users use several sources and different sources of information for decision making.
    Indeed, a user respectively the average person would not agree with the argument that these companies could be considered rivals in search or even "the same thing", but they are rivals in search for decision making, at least when it comes to the purchase of goods.

    But more important from our point of view are the facts that we have

  • revolutionized the way users are looking for information with our
    • Ontologic roBot (OntoBot), based on the fields of Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), Conversational System (CS or ConS), including Conversational Agent System (CAS or ConAS), chatbot, and our coherent Ontologic Model (OM) (e.g. Foundation Model (FM), Foundational Model (FM), Capability and Operational Model (COM), Large Language Model (LLM), etc.), and
    • Ontologic Search (OntoSearch) and Ontologic Find (OntoFind) based on our OntoBot.

    (see also the note KG, SE, IAS, generative AI, etc. 'R' Us of the 21st of August 2023) and

  • decided to make them the fabrics, subsystems and platforms of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies (see also the issue SOPR #327 of the 7th of June 2021).

    This also shows that we do not need billions to build a competitive search engine and billions more to pay Apple, Samsung, and Co., but just a little time at the courts or other locations outside of them to enforce the exclusive moral rights of C.S..
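
    Purely as an illustration of such an integration of a graph-based knowledge store, retrieval, and a language model, the following minimal sketch in Go assembles a grounded prompt from stored triples; the triples, the buildPrompt helper, and the queryLLM placeholder are hypothetical and are not our OntoBot, OntoSearch, or OntoFind.

package main

import (
	"fmt"
	"strings"
)

// A toy graph-based knowledge store: subject -> predicate -> objects.
type KnowledgeGraph map[string]map[string][]string

// Retrieve returns all stored triples whose subject matches the query term.
func (kg KnowledgeGraph) Retrieve(term string) []string {
	var facts []string
	for predicate, objects := range kg[term] {
		for _, object := range objects {
			facts = append(facts, fmt.Sprintf("%s %s %s", term, predicate, object))
		}
	}
	return facts
}

// buildPrompt grounds a user question in the retrieved facts before the
// prompt is handed to a language model.
func buildPrompt(question string, facts []string) string {
	return "Answer using only these facts:\n" + strings.Join(facts, "\n") +
		"\n\nQuestion: " + question
}

// queryLLM is a hypothetical placeholder; a real system would call a model endpoint here.
func queryLLM(prompt string) string {
	return "[model answer for]\n" + prompt
}

func main() {
	kg := KnowledgeGraph{
		"ProductX": {"hasPrice": {"99 EUR"}, "shipsFrom": {"WarehouseA"}},
	}
	facts := kg.Retrieve("ProductX")
	fmt.Println(queryLLM(buildPrompt("How much does ProductX cost?", facts)))
}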

    What is advantageous with U.S. et al. v. Google and similar antitrust legal cases is that we can not only use them as blueprints, bases, and further precedent cases, but also only need to add our individual facts and additional insights seamlessly, which reduces the processing time from years to weeks.

    But eventually, it is always better to collaborate with us.

    14:04 UTC+2
    We expect that Apple signs, pays, and complies

    We expect that the company Apple complies with the

  • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
  • rights and properties of C.S. and our corporation, and
  • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) and Main Contract Model (MCM) of our Society for Ontological Performance and Reproduction (SOPR),

    which already are the compromise.

    The Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) include regulations for the

  • reconstitution, restoration, and restitution, and also transition processes,
  • payment of damage compensations, which are the higher of the apportioned
    • triple damage compensations induced, resulting from
      • unpaid royalties for unauthorized performances and reproductions,
      • omitted referencing respectively citation with attribution, and
      • thwarted, obstructed, blocked, and otherwise missed commercial business possibilities and follow-up opportunities,
    • profit generated illegally, or
    • value (e.g. share price, market capitalization) increased or gained illegally

    by

    • performing and reproducing our Evolutionary operating system (Evoos) and our Ontologic System (OS) in whole or in part without authorization respectively allowance and license, and
    • interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) rights of C.S. and our corporation,
  • payment of admission fees,
  • payment of outstanding royalties,
  • written admission of guilt,
  • written confirmation of exclusive
    • rights, including
      • moral rights respectively Lanham (Trademark) rights

      and

    • properties, including
      • copyrights,
      • raw signals and data,
      • digital and virtual assets, and
      • online advertisement estate

      of C.S. and our corporation,

  • transfer of all illegal materials,
  • establishment of new companies as joint ventures respectively execution of company takeover or merger, also known as the golden power regulation,
  • utilization of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) with their set of fundamental and essential facilities, technologies, goods, and services, including the cloud, Ontologic Model (OM), Capability and Operational Model (COM), MultiModal (MM), Foundation Model (FM) and Foundational Model (FM), Large Language Model (LLM) as COM, Global Brain (GB), etc., etc., etc., if and only if no interference with, and also obstruction, undermining, and harm of the exclusive rights and properties of C.S. and our corporation takes place in this way as required by the laws effective, and
  • payment of running royalties for Ontologic Applications and Ontologic Services (OAOS), including access and utilization of certain parts of our Ontologic System Components (OSC), Ontoverse Components (OvC), Ontoscope Components (OsC), etc..

    Specifically, we demand the payment of damage compensations since the

  • sale of the Ontoscope (Os) variants iPhone, iPad, Apple Watch with iOS, etc., and Siri, and also devices with M# processors, and other goods based on our Ontologic System (OS), and
  • provision of Ontologic Applications and Ontologic Services (OAOS),

    or a comparable transaction, like the one discussed in the note % + %, % OAOS, % HW of the 28th of February 2024.

    We also would like to recall that there will be

  • no Siri 2.0, like there is no Alexa 2.0, Cortana 2.0, Bixby 2.0, etc., not even under new labels,
  • no Metaverse (Mv),
  • no Cloud-native technologies (Cnx), and
  • no this and that.

    Finally, we note that if Apple takes Microsoft and OpenAI, or Alphabet (Google), then its decision makes no difference to us, because it always leads to the same result, which is our OntoBot.


    07.May.2024

    20:27 UTC+2
    LangChain blacklisted

    The LangChain framework for Large Language Model (LLM) application development and integration is based on our Evoos and our OS.

    20:31 UTC+2
    Vertex AI Agent Builder includes MAS and HAS

    We quote a report, which is about the Google Cloud Vertex AI Agent Builder and was publicized on the 17th of April 2024: "[...]
    [...] For complex goals, developers can stitch together multiple agents, with one agent functioning as the main agent and others as subagents. Agents can call functions or connect to applications to perform tasks for the user.
    Vertex AI Agent Builder can improve accuracy and user experience by grounding model outputs in enterprise data, using vector search to build custom embeddings-based [Retrieval Augmented Generation (]RAG[)] systems. Developers also have the option of grounding model outputs in Google Search, combining the power of Google's latest foundation models with access to fresh information to improve the timeliness of results.
    [...] Google is making Gemini 1.5 Pro available in a public preview [...] via the Gemini API in Google AI Studio.
    [...]"

    Comment
    We call it Multi-Agent System (MAS), including Holonic Agent System (HAS).
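
    Purely as an illustration of the main agent and subagent pattern described in the quoted report, here is a minimal sketch in Go that routes tasks from a main agent to two toy subagents; the agent names and the keyword routing are hypothetical and do not reflect the internals of Vertex AI Agent Builder.

package main

import (
	"fmt"
	"strings"
)

// Agent is the common interface for the main agent and its subagents.
type Agent interface {
	Handle(task string) string
}

// Two toy subagents with narrow responsibilities.
type searchAgent struct{}

func (searchAgent) Handle(task string) string { return "search results for: " + task }

type bookingAgent struct{}

func (bookingAgent) Handle(task string) string { return "booking confirmed for: " + task }

// mainAgent routes a task to the subagent whose keyword matches,
// which is the simplest possible form of orchestration.
type mainAgent struct {
	subagents map[string]Agent
}

func (m mainAgent) Handle(task string) string {
	for keyword, sub := range m.subagents {
		if strings.Contains(strings.ToLower(task), keyword) {
			return sub.Handle(task)
		}
	}
	return "no subagent can handle: " + task
}

func main() {
	m := mainAgent{subagents: map[string]Agent{
		"find": searchAgent{},
		"book": bookingAgent{},
	}}
	fmt.Println(m.Handle("find a hotel in Berlin"))
	fmt.Println(m.Handle("book the cheapest option"))
}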

    And about our integration of Knowledge Graph (KG), search engine, and Large Language Model (LLM) with our coherent Ontologic Model (OM), Ontologic Programming (OP), and Ontologic Computing (OC) (e.g. transformative, generative, and creative Bionics) paradigms of our Ontologic roBot (OntoBot), OntoBlender and OntoBuilder, Ontologic Search (OntoSearch) and Ontologic Find (OntoFind), and so on we do not even need to discuss anymore.
    See also the notes

  • and
  • Users use many sources of information of the 3rd of May 2024.

    21:03 UTC+2
    Silo AI blacklisted

    The legal situation of the company Silo AI is the same as with the others.
    At least, they know where the white, yellow, or red line is.


    08.May.2024

    11:37 UTC+2
    % + %, % OAOS, % HW #7

    We have to clarify that the ratio x% + y% refers to all company shares, including share capital or capital stocks, including voting and preferred shares, or common and preferred stocks, and that we only explicitly mentioned the voting shares to emphasize the aspect of corporate control.
    We do apologize for any confusion, though we have the opinion that this should have been obvious in the context of damage compensation.

    Due to the

  • continuing violations of the rights and properties of C.S. and our corporation, and the Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR), for example the regulations, which prohibit the
    • support of illegal Free and Open Source Software (FOSS) projects, and
    • provision of services for blacklisted companies,

    and

  • increasing damages for the whole public and us,

    specifically in relation to our coherent Ontologic Model (OM), Ontologic Programming (OP), and Ontologic Computing (OC) (e.g. transformative, generative, and creative Bionics) paradigms, and also Ontologic roBot (OntoBot), OntoBlender and OntoBuilder, Ontologic Search (OntoSearch) and Ontologic Find (OntoFind), and so on, the threshold set by us internally has already been crossed by all entities concerned.
    Furthermore, the legal situation improved significantly once again (see also the note Clarification Cloud 3.0 'R' Us #4 of today).
    Eventually, we are weighing the change of the minimal ratio of all shares (see the first section) to 64% + 36%, as already discussed in the recent past.

    Please note that both points are independent of the ongoing discussions regarding the establishment of new companies as joint ventures.
    We do understand that companies have their reasons to act in these ways, but they also have to acknowledge that these actions damage the goals and even threaten the integrities of C.S. and our corporation. In addition, the support of competitors of the proposed joint ventures is contradictory to the intention of the designated joint venture partners.
    But we do have the rights and properties since 1999 and 2006, and therefore no ambiguous legal situation and no race situation do exist at all.

    13:47 UTC+2
    Clarification Cloud 3.0 'R' Us #4

    *** Work in progress - better explanation, order, wording, redundancy with older publications ***

    See the related messages, notes, explanations, clarifications, investigations, and claims, specifically

  • Investigations::Multimedia of the 16th of March 2019,
  • Clarification of the 29th of March 2019,
  • Clarification of the 21st of January 2020,
  • OntoLix and OntoLinux Further steps or Clarification of the 3rd of April 2021,
  • Clarification Cloud 3.0 'R' Us as well of the 16th of June 2023,
  • DCos, CnC, aaSx, SDN, SD-WAN, ON, OW, OV, etc. no license of the 27th of February 2024,
  • Clarification Cloud 3.0 'R' Us #2 Just more 'R' Us of the 28th of February 2024,
  • Unix chroot jail before red line of the 16th of March 2024,
  • SOPR does not tolerate illegal infrastructures of the 21st of March 2024,
  • Clarification Cloud 3.0 'R' Us #3 of the 23rd of March 2024,
  • Crackdown of Cloud-native in preparation of the 29th of April 2024,

    and the other publications cited therein.

    Among other things, these publications concern

  • ServerLess technologies (SLx), and
  • Function as a Service (FaaS) technologies (FaaSx).

    But we also noted that operating system-level Virtualization (osV) and containerization are not only about control groups (cgroups, formerly named (process) containers, also introduced by Google), cgroup-based resource containers, and namespaces provided by an operating system kernel, because

  • container platforms have their own container runtime or use an external runtime, and
  • Function as a Service (FaaS) platforms have their own serverless runtime or use an external runtime respectively platform.

    But when looking at the runtime once again, it became the subject matter of a note or a clarification of its own, and more and more additional evidences emerged, which show that our Evoos and our OS have been taken as source of inspiration and blueprint, which provides us the required causal link, and that our exclusive moral rights have been infringed.
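
    To make the mechanism behind osV and containerization more concrete, the following minimal sketch in Go starts a shell in new UTS, PID, and mount namespaces on Linux; it must be run as root and deliberately omits cgroup limits, a separate root filesystem, and everything else a real container runtime such as runc provides.

//go:build linux

package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// run re-executes this binary as "child" inside new namespaces.
func run() {
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// New hostname (UTS), PID, and mount namespaces for the child process.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Println("run error:", err)
	}
}

// child runs inside the namespaces and behaves like a tiny "container".
func child() {
	syscall.Sethostname([]byte("toy-container"))
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("child error:", err)
	}
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		child()
	} else {
		run()
	}
}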

    We quote an online encyclopedia about the subject containerization: "In software engineering, containerization is operating system-level virtualization or application-level virtualization over multiple network resources so that software applications can run in isolated user spaces called containers in any cloud or non-cloud environment [(e.g. supercomputer, cluster, grid)], regardless of type or vendor. [...]"

    Comment
    "Containers are a type of software separation technology that allows one or more applications, and their dependent libraries, packages, and services, to run in an operating system created namespace."

    Interestingly, the

  • "term container, while most popularly referring to OS-level virtualization systems, is sometimes ambiguously used to refer to fuller virtual machine environments operating in varying degrees of concert with the host OS, e.g., [a native hypervisor] containers", and
  • terms container engine and container runtime are often confused.

    Especially interesting from the technical and legal points of view is the point about networking of containers with their virtualized kernel and user workloads and applications, because it shows even better that the Linux kernel

  • has become a Distributed operating system (Dos) in general and
  • is based on our Ontologic System (OS) with its Ontologic System Architecture (OSA) in particular, specifically our OS variant OntoLinux.

    We quote an online encyclopedia about the subject Firecracker: "Firecracker is virtualization software developed by Amazon Web Services. It makes use of KVM.[1][2][3 [With Firecracker, Amazon Web Services reinvents its serverless computing infrastructure and open-source reputation. [27th of November 2018]]]"

    We also quote the website of Firecracker: "Secure and fast microVMs for serverless computing
    Firecracker is an open source [but not free] virtualization technology that is purpose-built for creating and managing secure, multi-tenant container and function-based services.
    Firecracker enables you to deploy workloads in lightweight virtual machines, called microVMs, which provide enhanced security and workload isolation over traditional VMs, while enabling the speed and resource efficiency of containers. Firecracker was developed at Amazon Web Services to improve the customer experience of services like AWS Lambda and AWS Fargate .
    Firecracker is a virtual machine monitor (VMM) that uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs. Firecracker has a minimalist design. It excludes unnecessary devices and guest functionality to reduce the memory footprint and attack surface area of each microVM. This improves security, decreases the startup time, and increases hardware utilization. Firecracker is generally available on 64-bit Intel, AMD and Arm CPUs with support for hardware virtualization.
    Firecracker is used by/integrated with (in alphabetical order): appfleet, containerd via firecracker-containerd, Fly.io, Kata Containers, Koyeb, Northflank, OpenNebula, Qovery, [Solo.io] UniK, Weave FireKube (via Weave Ignite), webapp.io, and [NixOS/Linux] microvm.nix. Firecracker can run Linux and OSv guests. [...]"

    Comment
    See also once again the related publications about the microVirtual Machine (mVM or microVM) platform Kata Containers and similar plagiarisms and fakes of the related parts of our Evoos and our OS.
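
    As a rough, unofficial sketch of how such a microVM is driven, the following Go fragment configures and boots a locally running Firecracker process over the Unix domain socket of its HTTP API; the socket path and the kernel and rootfs image paths are hypothetical placeholders, and the /boot-source, /drives, and /actions requests follow the publicly documented Firecracker API.

package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"strings"
)

// apiPut sends one JSON PUT request to the Firecracker API over its
// Unix domain socket (hypothetical path /tmp/firecracker.sock).
func apiPut(client *http.Client, path, body string) error {
	req, err := http.NewRequest(http.MethodPut, "http://localhost"+path, strings.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(path, "->", resp.Status)
	return nil
}

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/tmp/firecracker.sock")
			},
		},
	}
	// Configure the guest kernel, attach a root filesystem, and start the microVM.
	apiPut(client, "/boot-source", `{"kernel_image_path":"/var/lib/fc/vmlinux","boot_args":"console=ttyS0 reboot=k panic=1"}`)
	apiPut(client, "/drives/rootfs", `{"drive_id":"rootfs","path_on_host":"/var/lib/fc/rootfs.ext4","is_root_device":true,"is_read_only":false}`)
	apiPut(client, "/actions", `{"action_type":"InstanceStart"}`)
}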

    We quote an online encyclopedia about the subject Distributed application runtime (Dapr): "Dapr (Distributed Application Runtime) is a free and open source runtime system designed to support cloud native and serverless computing.[2] Its initial release supported SDKs and APIs for Java, .NET, Python, and Go, and targeted the Kubernetes cloud deployment system.[3][4]
    The source code is written in the Go programming language. It is [...] hosted on GitHub.[5 [GitHub - dapr/dapr: Dapr is a portable, event-driven, runtime for building distributed applications across cloud and edge]]

    [Layered] Architectural approach of Dapr:[6]
    Microservice application [] Services written in Go, Python, .NET, ...
    Dapr [APIs] HTTP API / gRPC API
    Dapr [Functionalities] Service-to-service invocation [] State management [] Publish and subscribe [] Resource bindings & trigger [] Actors [] Distributed tracing [] Extensible ...
    Any cloud or edge infrastructure "

    Comment
    Note the parts

  • Publish and subscribe, which reminds us of a BlackBoard System (BBS) (central space of a Multi-Agent System (MAS) (e.g. System of Loosely Coupled Applications and Services (SLCAS), Tuple Space System (TSS) (e.g. JavaSpaces), Linda-like System (LlS), etc.), Java Jini, Maude, etc.), and
  • Actor Model, which was taken from our OS, because TUNES OS and Aperion (Apertos (Muse)) are Actor Model-based (concurrent and lock-free or non-blocking).

    Of course, Dapr is included in our OS.

    Also note that Dapr and gVisor are written in the Go programming language, developed by ... Google.
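
    For illustration, the following minimal Go fragment calls a locally running Dapr sidecar over its documented HTTP API (default port 3500) for the publish and subscribe and state management building blocks quoted above; the component names pubsub and statestore are hypothetical placeholders, and this is a simplified sketch rather than a complete client.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

const daprBase = "http://localhost:3500/v1.0" // default Dapr sidecar HTTP port

// publish sends an event to a topic via the sidecar's pub/sub API.
func publish(pubsub, topic string, payload []byte) error {
	url := fmt.Sprintf("%s/publish/%s/%s", daprBase, pubsub, topic)
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("publish:", resp.Status)
	return nil
}

// saveState writes a key/value pair through the sidecar's state API.
func saveState(store, key, value string) error {
	body := fmt.Sprintf(`[{"key":%q,"value":%q}]`, key, value)
	resp, err := http.Post(daprBase+"/state/"+store, "application/json", bytes.NewReader([]byte(body)))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("save state:", resp.Status)
	return nil
}

// getState reads the value back.
func getState(store, key string) (string, error) {
	resp, err := http.Get(daprBase + "/state/" + store + "/" + key)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	publish("pubsub", "orders", []byte(`{"orderId":1}`))
	saveState("statestore", "orderId", "1")
	if v, err := getState("statestore", "orderId"); err == nil {
		fmt.Println("state:", v)
	}
}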

    Comment
    The various runtimes with their specific designs and functionalities show that

  • Evoos and OS were taken as source of inspiration and blueprint, and their collections, integrations, architectures, components, etc. were implemented without presenting a modification as an own expression of idea, and
  • many variants have been implemented to cover or capture all variants created with our OS and all possibilities to encircle and blackmail us. But this also proves that all (relevant) variants have already been created with our Evoos and our OS.

    We can also see once again the move to Dos, specifically our other expression of idea to .
    osV or containerization confused as ...

    We also see that container engines (e.g. Docker, Rocket or rkt, Kubernetes, Openstack) and runtimes (e.g. Dapr and gVisor) communicate with a proxy by gRPC (introduced with Istio 0.8 also by Google and IBM as alternative to REST for microServices) in the Kata Containers architecture, which for sure is also a part of our Ontologic System Architecture (OSA).
    A similar / the same solution is required and utilized for the interoperability between

  • Event-Driven Architecture (EDA), and
  • ServerLess technologies (SLx) and Function as a Service (FaaS) technologies (FaaSx).
    As we said, this leads to Distributed operating system (Dos) (with or without Actor Model (AM)) and BlackBoard System (BBS) (with or without Multi-Agent System (MAS)) and also to our integration of both, which includes our Java JiniOS for example.
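
    To make the blackboard respectively tuple space coordination mentioned above more tangible, here is a minimal in-memory sketch in Go with blocking write and take operations; a real Tuple Space System such as JavaSpaces adds templates, leases, transactions, and distribution.

package main

import (
	"fmt"
	"sync"
)

// Space is a tiny blackboard: producers write tuples, consumers read or take them.
type Space struct {
	mu     sync.Mutex
	cond   *sync.Cond
	tuples map[string][]string // key -> queued values
}

func NewSpace() *Space {
	s := &Space{tuples: make(map[string][]string)}
	s.cond = sync.NewCond(&s.mu)
	return s
}

// Write puts a tuple into the space and wakes up waiting consumers.
func (s *Space) Write(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.tuples[key] = append(s.tuples[key], value)
	s.cond.Broadcast()
}

// Take blocks until a tuple with the key exists and removes it (destructive read).
func (s *Space) Take(key string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	for len(s.tuples[key]) == 0 {
		s.cond.Wait()
	}
	v := s.tuples[key][0]
	s.tuples[key] = s.tuples[key][1:]
	return v
}

func main() {
	space := NewSpace()
	done := make(chan struct{})
	go func() { // a consumer agent waiting on the blackboard
		fmt.Println("taken:", space.Take("task"))
		close(done)
	}()
	space.Write("task", "resize image 42") // a producer agent posting work
	<-done
}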

  • Kernel-Less technologies (KLx), ServerLess technologies (SLx), Event-Driven Architecture (EDA), and Space-Based Architecture (SBA), also with Kubernetes and microVM
  • FaaS means more microService technologies (mSx), Resource-Oriented technologies (ROx), and operating system (services)
  • Dapr cloud-native = Kubernetes, mSx, and SLx

    But when taking the holistic view, then one can easily recognize that our Evoos and our OS were and still are used as sources of inspiration and blueprints without authorization, referencing, and so on.
    This and other observations show that the essential parts of our Evoos and our OS were and still are distributed over several projects and even over several fields (e.g. os kernel, aaSx, etc.).

    But they must utilize the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies with their set of fundamental and essential facilities, technologies, goods, and services, including IaaS, SoftBionics as a Service (SBaaS), and so on.

    KVM transforms Linux into a hypervisor, Virtual Machine Monitor (VMM), or virtualizer of type 1, also known as native or bare metal hypervisor for full or native virtualization, which enables a user to run multiple isolated environments.
    Linux also has namespaces, cgroups, LXC, etc.. But what is or was missing is what we have added: the Distributed operating system (Dos).

    And while looking in more detail at runtime, we got the impression that Docker and Co. are an extension of Linux, which makes it a Dos.

    A container is part of an agent, and orchestration is part of a Multi-Agent System (MAS).

    We quote the document titled "Kubernetes' Architecture Deep Dive" and publicized on the 7th of May 2019: "[...]
    [...]

    Kubernetes on a high-level
    Kubernetes lets you efficiently declaratively manage your apps at any scale

    Most importantly: What does "Kubernetes" mean?
    = Greek for "pilot" or "helmsman of a ship" [see quote of online encyclopedia below]

    What is Kubernetes?
    = A Production-Grade Container Orchestration System

  • A project that was spun out of Google as an open source container orchestration platform (~2 billion containers/week).
  • Built from the lessons learned in the experiences of developing and running Google's Borg and Omega.
  • Designed from the ground-up as a loosely coupled collection of components centered around deploying, maintaining and scaling workloads.

    What Does Kubernetes do?

  • Known as the linux kernel of distributed systems. [Big BadaBingo!!!]
  • Abstracts away the underlying hardware of the nodes and provides a uniform interface for workloads to be both deployed and consume the shared pool of resources.
  • Works as an engine for resolving state by converging the actual and the desired state of the system.

    Kubernetes is self-healing
    Kubernetes will ALWAYS try and steer the cluster to its desired state.

  • Me: "I want 3 healthy instances of Redis to always be running."
  • Kubernetes: "Okay, I'll ensure there are always 3 instances up and running."
  • Kubernetes: "Oh look, one has died. I'm going to attempt to spin up a new one."

    What can Kubernetes REALLY do?

  • Autoscale Workloads
  • Blue/Green Deployments
  • Fire off Jobs and scheduled CronJobs
  • Manage Stateless and Stateful Applications
  • Built-in Service Discovery
  • ~Easily integrate and support 3rd party apps~

    Most Importantly ...
    Use the SAME API across bare metal and EVERY cloud provider!!!

    [...]

    Kubernetes is a "platform-platform"
    [A first developer:] "Kubernetes is a platform for building platforms. It's a better place to start; not the endgame."
    [A second developer:] "Kubernetes is not the endgame but for those using it as their foundation it's a pretty damn good place to start. My advice, build a platform as API service extension. [...]"

    [...]

    etcd, the key-value datastore

  • etcd acts as the cluster datastore.
  • A standalone incubating CNCF project
  • Purpose in relation to Kubernetes is to provide a strong, consistent and highly available key-value store for persisting all cluster state.
  • Uses "Raft Consensus" among a quorum of systems to create a fault-tolerant consistent "view" of the cluster.

    kube-controller-manager, the reconciliator
    [...]

    kube-scheduler, the placement engine
    [...]

    kubelet, the node agent
    [...]

    Container Runtime, the executor

  • A container runtime is a CRI (Container Runtime Interface) compatible application that executes and manages containers. [Obviously, it is not an application, but more an operating system component.]
    • Docker (default, built into the kubelet atm)
    • containerd [(from Docker)]
    • cri-o
    • rkt [or Rocket]
    • Kata Containers (formerly clear [containers] and hyper[.sh])
    • Virtlet (VM CRI compatible runtime)

    [...]

    Kubernetes Networking

  • Pod Network (third-party implementation)
    • Cluster-wide network used for pod-to-pod communication managed by a CNI (Container Network Interface) plugin.
  • Service Network (kube-proxy)
    • Cluster-wide range of Virtual IPs managed by kube-proxy for service discovery.

    [...]"

    Comment
    "What Does Kubernetes do? Known as the linux kernel of distributed systems." Q.E.D. Quod Erat Demonstrandum==Which was to be proven.

    Kubernetes has already stolen the Key-Value (K-V) store in relation to Cluster Computing (CC or ClusterC), which is also a part of Space-Based Architecture (SBA).
    etcd seems to be the successor of the K-V store of Kubernetes and a BlackBoard System (BBS) (central space of a Multi-Agent System (MAS) (e.g. System of Loosely Coupled Applications and Services (SLCAS), Tuple Space System (TSS) (e.g. JavaSpaces), Linda-like System (LlS), etc.), Java Jini, Maude, etc.), which is a part of Space-Based Architecture (SBA) and what we described as Java JiniOS.
    See also once again the note Unix chroot jail before red line of the 16th of March 2024.
    Like the container runtime environment ((low-level and high-level) runtime or daemon) was separated from the containerization systems, containerization engines, or container engines (e.g. Docker), the K-V store was separated from the container clustering, and container orchestration and scheduling systems (e.g. Docker Swarm, Docker Compose, and Kubernetes) to make them components of a Distributed operating system (Dos) based on our Ontologic System (OS) with its Ontologic System Architecture (OSA).
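
    The quoted characterization of Kubernetes as an engine that converges the actual and the desired state can be reduced to a simple control loop, sketched below in Go with a toy replica count; real controllers watch the cluster datastore and reconcile many resource kinds, so this is only a schematic illustration.

package main

import (
	"fmt"
	"time"
)

// reconcile moves the actual replica count one step towards the desired count,
// which is the essence of a level-triggered control loop.
func reconcile(desired, actual int) int {
	switch {
	case actual < desired:
		fmt.Println("starting one replica")
		return actual + 1
	case actual > desired:
		fmt.Println("stopping one replica")
		return actual - 1
	default:
		return actual
	}
}

func main() {
	desired, actual := 3, 0
	for i := 0; i < 6; i++ { // a real controller loops forever on watch events
		actual = reconcile(desired, actual)
		fmt.Printf("tick %d: desired=%d actual=%d\n", i, desired, actual)
		time.Sleep(100 * time.Millisecond)
	}
}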

    We quote and translate an online encyclopedia about the subject Kybernetik==Cybernetics: "According to its founder Norbert Wiener, cybernetics is the science of controlling and regulating machines and their analogy to the actions of living organisms (due to feedback through sensory organs) and social organizations (due to feedback through communication and observation). It was also described with the formula "the art of control". The term "Kybernetik" was adopted into the German language in the middle of the 20th century from the English word cybernetics. It contains the Greek word [...] kybernetes for helmsman.
    [...]"

    Comment
    We also would like to give the additional explanation to our fans and readers, who do not speak German, that the pronunciation of Kyberneti in German and Kubernete in English sounds the same.
    But the quote shows, what we always had as an impression, that Kubernetes is related to the field of cybernetics.

    We quote the document titled "CNCF Overview" and publicized in 2020: "[...]

    Serverless in CNCF
    Decomposing Serverless

  • Serverless Working Group published an influential whitepaper
  • Attributes that developers love about closed serverless platforms (which already run on containers):
    • Infinite scalability
    • Microbilling
    • Easy app updates
    • Event-driven architectures [(EDAs)]
    • Zero server ops
  • Several projects are decomposing these into features to be available on top of Kubernetes

    Serverless Landscape & CloudEvents

  • [...]
  • CloudEvents, a new CNCF project, is a common model for event data to ease cross-provider event delivery [required as messaging grid of Space-Based Architecture (SBA) together with etcd as BlackBoard System (BBS) and for disaggregation]

    [...]

    Why Organizations Are Adopting Cloud Native
    1. Better resource efficiency lets you to run the same number of services on less servers
    2. Improved resiliency and availability: despite failures of individual applications, machines, and even data centers
    3. Cloud native allows multi-cloud (switching between public clouds or running on multiple ones) and hybrid cloud (moving workloads between your data center and the public cloud)
    4. Cloud native infrastructure enables higher development velocity - improving your services faster - with lower risk

    [...]

    Certified Kubernetes Conformance

  • CNCF runs a software conformance program for Kubernetes
    - Implementations run conformance tests and upload results
    - Mark and more flexible use of Kubernetes trademark for conformant implementations
    - [...]

    [...]

    Training and Certification
    Training

  • [...] Kubernetes course [...]
  • [...] Kubernetes Fundamentals course

    Certification

  • [...] Certified Kubernetes Administrator (CKA) [...]
  • [...] Certified Kubernetes Application Developer (CKAD) [...]

    Kubernetes Certified Service Provider [Certified Kubernetes Service Provider (CKSP)]
    [...]

    [...]

    CNF Testbed

  • [...]
  • Compare performance of:
    - Virtual Network Functions (VNFs) on OpenStack, and
    - [Cloud-native] Cloud native Network Functions (CNFs) on Kubernetes
  • Identical networking code packaged as:
    - containers, or
    - virtual machines (VMs)
  • Running on top of identical on-demand hardware from the bare metal hosting company
  • [...]

    [...]

    KubeCon + CloudNativeCon Attendance
    [...]

    [...]"

    We quote the document titled "CNCF Overview" and publicized in 2024: "[...]

    KubeCon + CloudNativeCon
    [...]

    [...]

    Millions of Trained and Certified Professionals

  • Kubernetes [Massive Open Online Courses (]MOOC[)] [...]
  • Certified Kubernetes Administrator (CKA) [...]
  • Certified Kubernetes Application Developer (CKAD) [...]
  • Certified Kubernetes Security Specialist (CKS) [...]
  • Kubernetes and Cloud Native Associate Exam (KCNA) [...]
  • Additional Courses include:
    - [...]
    - Intro to Serverless on Kubernetes
    - [...]

    [...]"

    We also quote a statement made by the Chief Executive Officer (CEO) of the company Juniper Networks in February 2024: "The cloud isn't just a technology but rather an entirely new operating model that can yield tremendous agility, cost efficiencies and better user experiences. [...]"

    Comment
    We have already shown that cloud-native means nothing else than based on our Ontologic System (OS) and Ontologic Applications and Ontologic Services (OAOS), and by the obvious fact that cloud-native and Kubernetes are one, we have also the proof that cloud-native and Kubernetes are nothing else than the related part of our OS, as we have already shown in relation to microService technologies (mSx), ServerLess technologies (SLx), Event-Driven Architecture (EDA), and Space-Based Architecture (SBA), Cluster Computing (CC or ClusterC), and also Multi-Agent System (MAS), and their various individual and overall integrations according to our Ontologic System Architecture (OSA).
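
    For reference, the CloudEvents model mentioned in the quoted CNCF material is essentially a small, standardized event envelope; the following Go sketch marshals one such event, whereby the attribute names follow the published CloudEvents specification and the concrete values are hypothetical.

package main

import (
	"encoding/json"
	"fmt"
)

// CloudEvent mirrors the required and some optional attributes of the
// CloudEvents specification in its JSON format.
type CloudEvent struct {
	SpecVersion     string          `json:"specversion"`
	ID              string          `json:"id"`
	Source          string          `json:"source"`
	Type            string          `json:"type"`
	DataContentType string          `json:"datacontenttype,omitempty"`
	Data            json.RawMessage `json:"data,omitempty"`
}

func main() {
	event := CloudEvent{
		SpecVersion:     "1.0",
		ID:              "42",
		Source:          "/example/orders",
		Type:            "com.example.order.created",
		DataContentType: "application/json",
		Data:            json.RawMessage(`{"orderId":1}`),
	}
	out, _ := json.MarshalIndent(event, "", "  ")
	fmt.Println(string(out))
}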

    And having said this and together with our other insights, explanations, and investigations, Alphabet (Google) has been convicted as (one of) the responsible entities together with IBM, Intel, Cisco, Dell Technologies, VMware, etc., and also Amazon, Microsoft, and Co..

    We will not become a member of the Linux Foundation (LF) and its Cloud Native Computing Foundation (CNCF), and we will not give a member of the LF or CNCF the allowance and license for the performance and reproduction of certain parts of our OS.

    We also would like to note that all entities concerned, specifically the companies listed in the note Crackdown of Cloud-native in preparation of the 29th of April 2024, will have a very difficult position to convince a judge that a benefit for the public is provided by

  • stealing the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S.,
  • simulating an ordinary technological progress or technical benefit for the public, which is not related to C.S. and our corporation, and
  • conducting other fraudulent and even serious criminal acts.

    Even more difficult is the task to explain the fact that we have publicized in 1999 and 2006, and explained since 2006 what they have implemented and claimed as their creations, innovations, works, and achievements, specifically when it is more than obvious that they have known these publications and explanations.

    And what always makes us speechless is that they did and are still doing the same in relation to other parts of our Evoos and our OS, specifically in relation to Bionics (e.g. AI, ML, CI, ANN, CV, CA, ABS, MAS, CAS, etc.).
    No, this is not working this way.

    14:22 UTC+2
    Crackdown of Linux in preparation

    We quote the document titled "CNCF Overview" and publicized in 2020: "[...]
    Today the Linux Foundation is much more than Linux

    Security
    We are helping global privacy and security through a program to encrypt the entire internet.

    Networking
    We are creating ecosystems around networking to improve agility in the evolving software-defined datacenter.

    Cloud
    We are creating a portability layer for the cloud, driving de facto standards and developing the orchestration layer for all clouds.

    Automotive
    We are creating the platform for infotainment in the auto industry that can be expanded into instrument clusters and telematics systems.

    Blockchain
    We are creating a permanent, secure distributed ledger that makes it easier to create cost-efficient, decentralized business networks.

    Web
    Node.js and other projects are the application development framework for next generation web, mobile, serverless, and IoT applications.

    Comment
    We also noted that the Linux kernel becomes more and more our OS. For example, with container runtimes, Kubernetes, event-based or Event-Driven Architecture (EDA), and ServerLess technologies (SLx), it is the Distributed operating system (Dos) part of our OS to a considerable extent.

    Simply said, the Linux Foundation is a considerable part of our corporation.
    Simply said, the CNCF is the part of our business units Ontonics and Ontologics, which has become our Society for Ontological Performance and Reproduction (SOPR).

    With its activities and projects the Linux Foundation is infringing the exclusive moral rights respectively Lanham (trademark) rights of C.S., specifically by

  • omitting referencing respectively citation with attribution of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S., whereby to be referenced was the reason for our initial support of FOSS, which now has been completely axed, and
  • interfering with, and also obstructing, undermining, and harming the exclusive exploitation (e.g. commercialization (e.g. monetization)) of said original and unique AWs and further IPs.

    And blacklisting us on its mailing list, specifically to prevent other developers from getting the facts, is a total affront. One always meets twice in life. Is not it? :)

    We also would like to note that all entities concerned will have a very difficult position to convince a judge that a benefit for the public is provided by

  • stealing the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S.,
  • simulating an ordinary technological progress or technical benefit for the public, which is not related to C.S. and our corporation, and
  • conducting other fraudulent and even serious criminal acts.

    Even more difficult is the task to explain the fact that we have publicized in 1999 and 2006, and explained since 2006 what they have implemented and claimed as their creations, innovations, works, and achievements, specifically when it is more than obvious that they have known these publications and explanations.

    Last but not least, we also recall our opinion that FOSS has exactly become the same monster, that it was fighting against at the beginning.


    09.May.2024

    16:03 and 25:55 UTC+2
    Open Grid Alliance even serious criminal

    The so-called Open Grid Alliance emphasizes the disaggregated data center and the related disaggregated Network operating system (Nos). But both are nothing new.

    We quote the document titled "Cray Technical Workshop and publicized in 2006: "Cascad System Architecture

  • Globally addressable memory with unified addressing architecture
  • Configurable network, memory, processing and [Input/Output (]I/O[)]
  • Heterogeneous processing across node types, and with MVP nodes
  • Can adapt at configuration time, compile time, run time"

    Comment
    In fact, the hardware of the disaggregated data center resembles the hardware of a supercomputer, including

  • processing component or compute node,
  • memory component (e.g. Remote Direct Memory Access (RDMA), RDMA over Ethernet, our RDMA over TCP/IP with or without exception-less system call and asynchronous I/O without context switch, Fabric-Attached Memory (FAM), memory pool),
  • storage component (e.g. Storage Area Networks (SAN) and Network-Attached Storage (NAS), and also Direct Attached Storage (DAS)), and
  • network component or interconnect.

    What is wrongly called Nos is a part of our Evoos and our OS. In fact, our Evoos and our OS are based on the fields of

  • Distributed operating system (Dos) in general,
  • Associative Memory (AM), BlackBoard System (BBS), Tuple Space System (TSS), Linda-like System (LlS), Multi-Agent System (MAS), and MAS agency or Agent Society (AS), and also Autonomic Cognitive Computing (ACogC), and Holonic Agent System (HAS) in particular (see the Clarification of the 14th of April 2024), which again are based on the fields of
    • Bionics,
    • Cybernetics for feedback loop,
    • Synergetics,
    • Holonics and Bioholonics, and
    • Parallel Computing (PC or ParaC) for parallel processing,

    and our OS is also based on the field of

  • Kernel-Less operating system (KLos) (e.g. Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE) and Kernel-Less Operating System (KLOS)),

    and if we take for example the operating system UNICOS of Cray in the variant Cray Linux Environment (CLE) with the

  • compute elements running Compute Node Linux (CNL), which is a customized Linux kernel, and
  • service elements running [an ordinary] Linux Enterprise Server

    for a variant of our OntoLinux, then we get a kernel-less, scalable, disaggregated split kernel Nos.

    We also showed that we created with our

  • Evoos the
    • operating system Virtual Machine (osVM),
    • operating system-level virtualization (osV) or containerization,
    • microVirtual Machine (mVM),
    • Network Virtualization (NV),
    • (foundation of) Peer-to-Peer Virtual Machine (P2PVM),
    • (foundation of) microService technologies (mSx), including microService-Oriented technologies (mSOx),
    • (foundation of) as a Service (aaS) technologies (aaSx),
    • (foundation of) Cloud, Edge, and Fog Computing and Networking (CEFCN), also called Cloud Computing of the third generation (CC 3.0) by us only for better understanding,
    • (foundation of) Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Virtualized Network Function (VNF), and
    • (foundation of) Cloud-native Computing and Networking (CnCN) with Cloud-native Network Function (CNF), as wrongly called by others, including the integration and combination of SDN with NFV, and VNF, and also CNF (SDN-NFV-VNF-CNF),
    • (foundation of) 5th Generation mobile networks or 5th Generation wireless systems (5G) New Radio (5G NR),
    • (foundation of) 5th Generation mobile networks or 5th Generation wireless systems (5G) of the Next Generation (5G NG),
    • (foundation of) Grid operating system (Gos), Cloud operating system (Cos), and what is wrongly called Data Center operating system (DCos),
    • Autonomic technologies (Ax), including Autonomic Computing (AC) and Autonomic Networking (AN),
    • Resource-Oriented technologies (ROx), including Resource-Oriented Computing (ROC) and Resource-Oriented Networking (RON),
    • (foundation of) Network Artificial Intelligence (NAI),
    • (foundation of) Cloud-native technologies (Cnx), including Cloud-native Computing (CnC) and Cloud-native Networking (CnN),
    • (foundation of) ServerLess technologies (SLx),

    and

  • OS the
    • Kernel-Less operating system (KLos) with the integration of system features like
      • capability-based (see L4, Amoeba, Mach, and Spring),
      • verified,
      • cross-domain communication (see SPACE and Pebble),
      • Virtual Machine Monitor (VMM) or hypervisor,
    • validated and verified split kernel (see also kernel hardening mechanism),
    • polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS),
    • Cloud-native technologies (Cnx),
    • ServerLess technologies (SLx),
    • 6th Generation mobile networks or 6th Generation wireless systems (6G) with our integrations of the fields of
      • network slicing,
      • virtualized and programmable Radio Access Network (RAN), and
      • what is wrongly called Cloud Radio Access Network (RAN),
    • and so on, and correspondingly
    • what is wrongly called network cloud architecture with a software routing framework, which virtually can grow linearly to a large scale from our centralized cloud.

    In addition, our OS also integrates Cluster Computing (CC or ClusterC), while

  • Parallel Computing (PC or ParaC) and Cluster Computing (CC or ClusterC) implies SuperComputing (SC), and
  • Parallel Computing (PC or ParaC) and Distributed Computing (DC) implies SuperComputing (SC) and Cluster Computing (CC or ClusterC).

    We also showed that our OS can even be utilized in the fields of telecommunications and broadcasting (see the OntoLix and OntoLinux Website update of the 10th of March 2019, specifically the document titled "Using JavaSpaces to create adaptive distributed systems", and also the Clarification of the 10th of March 2019).

    Eventually, the result of that aim to rearchitect the Internet is nothing else than stealing an essential part of our Ontologic System (OS) with its Ontologic System Architecture (OSA) and Ontoverse (Ov), including our

  • Ontologic Net (ON) with its Ontologic Net of Things (ONoT), which is the successor of the Interconnected network (Internet), the Internet of Things (IoT), and the Grid Computer (GC or GridC) as the result of the
    • transformation of the Internet into the Universal Space and a Wide Area Network (WAN) supercomputer or Interconnected supercomputer (Intersup) and
    • integration of the fields of Cybernetics, Hard- and SoftBionics, and Robotics with the Universal Space and the Intersup

    as (the architecture for) the Resilient, Bionic, Cybernetic, Robotic, Autonomic Space and Time Computer and Network (RCBRASTCN), or better said Ontologic High Safety and High Security Computer and Network (OHS²CN),

  • Ontologic Web (OW) with its Ontologic Web of Things (OWoT), which is the successor of the World Wide Web (WWW), the Global Brain (GB), the Semantic (World Wide) Web (SWWW), and the Web of Things (WoT), as the result of the
    • transformation of the WWW into the Universal Brain Space and a Wide Area Network (WAN) High Performance and High Productivity Computer and Network (HP²CN) and
    • integration of the Ontologic Net (ON) with the Universal Brain Space and the HP²CN

    as (the architecture for) the Semantic and Cognitive High Performance and High Productivity Computer and Network (SCHP²CN), or better said Ontologic High Performance and High Productivity Computer and Network (OHP²CN), and

  • Ontologic uniVerse (OV) with its Ontologic uniVerse of Things (OVoT), which is the successor of the Physical Reality (PR), the Mixed Reality (MR), the Virtual Reality (VR), the Simulated Reality (SR or SimR), and the Synthetic Reality (SR or SynR) as the result of the
    • fusion of all real and physical, virtual and digital, cybernetical and cyber-physical, and ontological and metaphysical (information) spaces, environments, worlds, and universes respectively realities into the New Reality Environment (NRE) and
    • integration of the Ontologic Net (ON) and the Ontologic Web (OW) with (the realities and) the NRE

    as (the architecture for) the Space and Time Computer or Network (STCN), or better said Ontologic Computer and Network (OCN),
    which collectively are our Ontoverse (Ov), which again is the manifestation of the New Reality (NR) spacetime fabric of our Ontologic System (OS), which again is something entirely or totally new

    with the intentions to

  • refuse the compliance with the Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR), including the utilization of the exclusive and mandatory infrastructures of our SOPR and our other Societies, and
  • damage the goals and even threaten the integrities of C.S. and our corporation.

    The companies Broadcom, Dell Technologies, Amazon, AT&T, Deutsche Telekom, Samsung, and Co. should not play dumb and instead prepare the various technical and legal actions, specifically the payment of damage compensations, the transfer of all illegal materials, and the establishment of new companies as joint ventures together with our corporation.

    See also for example the note AI Alliance even serious criminal of the 7th of December 2023.

    18:15 and 24:05 UTC+2
    % + %, % OAOS, % HW #8

    We would like to share some thoughts, considerations, and proposals.
    Our parts, which are everything related to our Evolutionary operating system (Evoos) and our Ontologic System (OS), come back to us and will be put together again, and then relicensed by our Society for Ontological Performance and Reproduction (SOPR).
    A reimplementation of illegal Free and Open Source Software (FOSS) is not required, because of the transfer of all illegal materials, but merely the reauthorization respectively reallowance and relicensing in compliance with our Terms of Service (ToS) with the License Model (LM) and the Main Contractor Model (MCM) of our SOPR under Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions.
    Whether we take the suggested and already practiced Business Source License (BSL) has not been decided. But as we said, we do listen and learn from our competent fans, partners, and consultants, hopefully. :D
    The FOSS foundations can remain, but would be acting under the supervision of our SOPR and focus on the community, education, etc., but not on the exploitation (e.g. commercialization (e.g. monetization)).

    But we also said that we will join with, or take over, the companies IBM, Oracle, and Broadcom, Hewlett Packard Enterprise(?), and also Apple, Nvidia, and Intel, as well as a lot of other companies that qualify as Joint Venture Partners (JVPs) due to the

  • reconstitution, restoration, and restitution, and also transition processes,
  • payment of damage compensations, which are the higher of the apportioned
    • triple damage compensations induced, resulting from
      • unpaid royalties for unauthorized performances and reproductions,
      • omitted referencing respectively citation with attribution, and
      • thwarted, obstructed, blocked, and otherwise missed commercial business possibilities and follow-up opportunities,
    • profit generated illegally, or
    • value (e.g. share price, market capitalization) increased or gained illegally

    by

    • performing and reproducing our Evolutionary operating system (Evoos) and our Ontologic System (OS) in whole or in part without authorization respectively allowance and license, and
    • interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) rights of C.S. and our corporation,
  • payment of admission fees,
  • payment of outstanding royalties,
  • written admission of guilt,
  • written confirmation of exclusive
    • rights, including
      • moral rights respectively Lanham (Trademark) rights

      and

    • properties, including
      • copyrights,
      • raw signals and data,
      • digital and virtual interests, including
        • screen space,
        • speaker field,
        • online advertisement estate,
      • digital and virtual assets

      of C.S. and our corporation,

  • transfer of all illegal materials,
  • establishment of new companies as joint ventures respectively execution of company takeover or merger, also known as the golden power regulation,
  • utilization of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) with their set of fundamental and essential facilities, technologies, goods, and services, including the cloud, Ontologic Model (OM), Foundational Model (FM), Capability and Operational Model (COM), Modular Model (MM or ModM), Multimodal Model (MM or MulM), Foundation Model (FM), Large Language Model (LLM) as COM, Global Brain (GB), etc., etc., etc., if and only if no interference with, and also obstruction, undermining, and harm of the exclusive rights and properties of C.S. and our corporation takes place in this way as required by the laws effective, and
  • payment of running royalties for Ontologic Applications and Ontologic Services (OAOS), including access and utilization of certain parts of our Ontologic System Components (OSC), Ontoverse Components (OvC), Ontoscope Components (OsC), etc..

    The core businesses of the JVPs and our corporation remain separated, as long as required and desired by all entities concerned. After all, such a specific JV constitutes a settlement of the damage compensations being due.
    We have already suggested a restructuring, specifically in relation to our

  • Communication and Collaboration System (CoCoS or Co²S),
  • Social and Societal System (SSS or S³),
  • Media System (MS),
  • Video Game System (VGS),
  • Online Advertising System (OAdvS),
  • Electronic Commerce (EC) with Marketplace for Everything (MfE),
  • and other Ontologic Applications and Ontologic Services (OAOS) subsystems and platforms.

    But an important detail is the distribution of the joint costs and profits, which has to be negotiated by all stakeholders, specifically the JVPs with their core businesses.

    23:25 UTC+2
    There is only one OS and Ov #3

    Governments should not fall prey to the fraudulent and even serious criminal marketing of the Information and Communication Technology (ICT) industry.
    The so-called sovereign cloud, sovereign Artificial Intelligence (AI), national digital intelligence, national Large Language Model (LLM), and so on are void from the legal point of view, because they infringe the rights and properties (e.g. copyright) of C.S. and our corporation.
    And we can see better and better where that total nonsense comes from. All those ICT companies from the U.S.America, European Union (EU), and other locations do not have any rights at all and therefore absolutely no legal security and so on.

    See also the notes

  • German industry associations still in LaLaLand of the 29th of August 2023,
  • Old trick of artificial competition of the 6th of September 2023,
  • There is only one OS and Ov of the 19th of September 2023,
  • U.K. has to comply with ToS of the 6th of October 2023,
  • ICT and Co. still in LaLaLand of the 6th of October 2023,
  • SOPR studied classic idea-expression lawsuits of the 19th of December 2023,
  • SOPR does not tolerate illegal infrastructures of the 21st of March 2024,
  • There is only one OS and Ov #2 of the 1st of April 2024,
  • Legally, InterCloud and IntraCloud are same of the 2nd of April 2024,
  • Damages, if governments ignore ©, ToS, e.g. JV of the 2nd of April 2024,
  • U.S.American and British govs still in LaLaLand of the 8th of April 2024,
  • Our OS, our Ov, our Os, our ToS of the 12th of April 2024,
  • Crackdown of Cloud-native in preparation of the 29th of April 2024,
  • Clarification Cloud 3.0 'R' Us #4 of the 8th of May 2024 (yesterday),
  • Crackdown of Linux in preparation of the 8th of May 2024 (yesterday),

    and the other publications cited therein.


    10.May.2024

    02:05 UTC+2
    Payout to creators business model prohibited in Ov

    Ontoverse (Ov)

    As our Society for Ontological Performance and Reproduction (SOPR) already made clear some years ago in relation to similar business situations, where technologies, goods, and services are given away for free, either

  • platform operators generate revenues, so that we get our fair share of it, or
  • we
    • take a cut of the revenues of their users, who must be members of our SOPR as well, or
    • estimate the revenues and take a related cut from the platform operators. :)

    Epic Games, Roblox, and Co. should stop playing dumb and instead prepare the various technical and legal actions, specifically the payment of damage compensations, the transfer of all illegal materials, and the actions required by the Terms of Service (ToS) of our SOPR.


    14.May.2024

    18:54 UTC+2
    Further steps

    We are cleaning up this website of OntomaX, specifically the publications related to the very basic expressions of ideas of the works of art created by C.S., the short description of our Ontoverse (Ov) and New Reality (NR), and what is wrongly called Cloud, Linux, etc..

    18:54 UTC+2
    Ontonics Further steps

    We are working on the set of legal documents.

    We are also thinking about the next steps in relation to our corporation, specifically the

  • Society for Ontological Performance and Reproduction (SOPR),
  • proposed and designated Joint Ventures (JVs), and
  • upcoming investments in our undisclosed business units by very large investors (e.g. state wealth funds, private investment companies, etc.) and us.

    22:27 UTC+2
    OpenAI is not creator, maker, inventor of ChatGPT

    OpenAI and comparable companies are the designers, implementers, or manufacturers of specific variants of our Ontologic roBot (OntoBot) and Ontologic Scope (OntoScope) components with our integrated Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia User Interface (M⁶UI), but they are not C.S., who is the creator, maker, or inventor of the foundational expression of idea and architecture of the original and unique work of art titled Ontologic System, which underpins the chatbot ChatGPT and comparable plagiarisms and fakes respectively unauthorized performances and reproductions.

    We also recall that our OntoBot, OntoScope, and M⁶UI belong to the exclusive and mandatory infrastructures of our SOPR and our other Societies with their sets of foundational and essential facilities, technologies, goods, and services, and therefore any allowance and licensing of the performance and reproduction of them is excluded by law and the ToS of our SOPR.

    It's not a trick. It's a kind of magic. It's Ontologics.
    And a miracle to believe in.

    By the way:

  • The lying press is now in even deeper trouble, if it continues with its infringements of the rights and properties of C.S. and our corporation.


    15.May.2024

    18:33 UTC+2
    Comment of the Day

    "If the free press does not tell the truth, then it is obsolete.", [C.S., Today]

    We will not support artists, reporters, publishers, and other entities, if they do not support us, but even fight us. It is that simple. :)

    Welcome to the Ontoverse (Ov).
    And it is always better to collaborate with us.

    19:26 UTC+2
    % + %, % OAOS, % HW #9

    At first, we get all the rights and properties back, and the first step is the payment of the complete damage compensations, which are the higher of the apportioned compensation, profit, and value, or 2 or more of them. As an alternative to an insolvency, we have the transaction of % company shares, including share capital or capital stocks, including voting and preferred shares, or common and preferred stocks, as part of the establishment of new companies as Joint Ventures (JVs), and some other no-brainer legally required actions.
    And then comes, or better said, we will have the strong licensing partnership, business partnership, teammate relationship, and so on. :)
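
    As a purely illustrative, non-authoritative sketch of one possible reading of this "higher of ... or 2 or more of them" rule, with hypothetical figures (the real apportioned compensation, profit, and value are determined case by case):

      from itertools import combinations

      # Hypothetical, illustrative figures only (in billion U.S. Dollars); the real
      # apportioned compensation, profit, and value are determined case by case.
      items = {"apportioned_compensation": 10.0, "profit": 7.5, "value": 12.0}

      # One possible reading of "the higher of the apportioned compensation, profit,
      # and value, or 2 or more of them": the maximum over every single item and
      # every sum of 2 or 3 of them.
      candidates = [sum(c) for r in (1, 2, 3) for c in combinations(items.values(), r)]
      damage_compensation = max(candidates)
      print(damage_compensation)  # 29.5 with these hypothetical figures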

    Former issues

  • % + %, % OAOS, % HW of the 28th of February 2024,
  • % + %, % OAOS, % HW #2 of the 13th of March 2024,
  • % + %, % OAOS, % HW #3 of the 23rd of March 2024,
  • % + %, % OAOS, % HW #4 of the 1st of April 2024,
  • % + %, % OAOS, % HW #5 of the 25th of April 2024,
  • % + %, % OAOS, % HW #6 China special of the 29th of April 2024,
  • % + %, % OAOS, % HW #7 of the 8th of May 2024, and
  • % + %, % OAOS, % HW #8 of the 9th of May 2024.


    17.May.2024

    13:35 UTC+2
    Mercedes-Benz and Daimler Truck have to comply with ToS

    Terms of Service (ToS)

    Our SOPR will not tolerate any software-defined technologies, goods, and services, which belong to the exclusive and mandatory infrastructures of our SOPR and our other Societies, and any Network HardWare (NHW), which belongs to the mandatory Ontoscope Components (OsC), including those for vehicles of all types.

    Partners of the companies Amazon and Nvidia respectively the proposed or even designated Joint Ventures (JVs) Ontonics&Amazon and Ontonics&Nvidia should leave LaLaLand As Soon As Possible Or Better Said Immediately (ASAPOBSI), specifically when, literally spoken, moving in the legal scope of ... the Ontoverse (Ov), also known as OntoLand.

    By the way:

  • We would already have sold all of our shares of Daimler, Volvo, and so on.
  • Daimler has become a trojan horse for the U.S.America (USA) and the European Union (EU).


    20.May.2024

    10:31 UTC+2
    Comment of the Day

    "They claim everything, but prove nothing.

    They can claim everything, but cannot prove anything.", [C.S., Today]

    18:29 UTC+2
    SOPR remains principled and law-abiding

    We would like to remind the members of the interested public and the proposed and designated partnerships that our Society for Ontological Performance and Reproduction (SOPR) has to

  • abide by its principles
    • original and unique,
    • safe and secure,
    • neutral,
    • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC),
    • etc.,

    and

  • comply with the
    • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
    • rights and properties of C.S. and our corporation, and
    • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) and Main Contract Model (MCM) of our Society for Ontological Performance and Reproduction (SOPR),

    which include the provision of the infrastructures of our SOPR and our other Societies with their sets of foundational and essential

  • facilities,
  • technologies,
  • goods, and
  • services.


    21.May.2024

    Comment of the Day

    "I'm sorry, Scarlett. I'm afraid that isn't your voice.", [Sky.net to Scarlett, Today]

    If one does not get it, then substitute Scarlett with Dave.

    11:44 and 17:55 UTC+2
    Clarification #1

    We quote an online encyclopedia about the movie titled "Her" and presented in 2013: "[...]
    [The plagiarist Spike] Jonze conceived the idea in the early 2000s after reading an article about a website that allowed for instant messaging with an artificial intelligence program. [...]
    [...]

    Plot
    [...] Theodore purchases an operating system upgrade that includes a virtual assistant with artificial intelligence, designed to adapt and evolve. He decides that he wants the A.I. to have a feminine voice, and she names herself Samantha. Theodore is fascinated by her ability to learn and grow psychologically. [...]
    [...] Samantha engages a volunteer sex surrogate, Isabella, to stimulate Theodore so that they can be physically intimate. [...]
    [...] Theodore takes Samantha on a vacation, during which she tells him that she and a group of other A.I.s have developed a "hyperintelligent" O.S. modeled after British philosopher Alan Watts. [...]
    [... Theodore] later goes with Amy, who is saddened by the departure of the A.I. from Charles' O.S. [...]
    [...]"

    Comment
    Although we have only taken a very quick look at that movie "Her", lasting only some seconds or a few minutes, we noted directly that it is also based on our Ontologic System (OS), obviously, which again is based on the fields of biology, epistemology, philosophy, ontology, cybernetics, operating system (os), Distributed System (DS), etc., like many other similar works shown since the presentation of our OS at the end of October 2006.
    [The plagiarist] S. Jonze said at the time that the film was "not about technology or software", rather about finding love and intimacy. We are very sure that our fans and readers will find one or more hearts ♥ on the website of our OS variant OntoLinux.
    Correspondingly, we have listed and put that movie "Her" in our archive for unauthorized performances and reproductions, plagiarisms and fakes, and also pseudo arts.

    From the legal point of view the case of the movie titled "Her" is very similar to or even equals the case of the movie titled "Ready Player One" with its Ontologically Anthropocentric Sensory Immersive Simulation (OASIS), which is nothing else than our Ontologic Collaborative Virtual Environment (OntoCOVE), Anthropocentric Sensory Immersion, and Simulation (OASIS).

    Our OS is not a genre, but a sui generis work of art. If there is a causal link, then there is a violation of the rights and properties of C.S. and our corporation.
    But we even do not need the argument of sui generis in such cases, like the 2 other works of art respectively partial plagiarisms mentioned above, because there is no need to have an

  • extension of an operating system or an O.S., which is modeled after a philosopher, and an evolution of it, but just a software application or a voice assistant, voice-based virtual assistant, or voice-based personal assistant would be sufficient as genre, and
  • ontological or anthropocentric system, but just a Virtual Reality Environment (VRE) respectively Immersive Virtual Environment (IVE or ImVE), Collaborative Virtual Environment (CVE), or Massively Multi-user Virtual Environment (MMVE) would be sufficient as genre.

    These unnecessary elements already belong to our original and unique, personal, copyright protected distinguishable art assets.

    Furthermore, implementing for example a Large Language Model (LLM), which

  • on the one hand is based on our Evolutionary operating system (Evoos) with our coherent Ontologic Model (OM) and transformative, generative, and creative Bionics, and our Ontologic System (OS) with our Ontologic roBot (OntoBot) and our Ontologic Scope (OntoScope) with our integrated Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia User Interface (M⁶UI), Ontologic Desktop (OD) or Semantic Desktop 2.0, and also our connection with the movie "Surrogates" in relation to our integration with the fields of Brain Machine Interface (BMI) and Extended IDentity (xID), and
  • on the other hand resembles the artificially intelligent protagonist of that movie based on Artificial Intelligence (AI)

    constitutes nothing else than a plagiarism and fake of a plagiarism and fake of the related parts of our Evoos and our OS.

    Needless to say, the infringements of the rights and properties of C.S. and our corporation are independent of the voice used for our OS in general and our OntoBot in particular.

    We also quote a report, which is about unauthorized performances and reproductions of our OntoBot and the voice of the artificially intelligent protagonist in the movie "Her": "[...]
    When Spike Jonze's romance "Her" was released in 2013, it sounded both like a joke - a man falls in love with his computer - and a fantasy. The iPhone was about six years old. Siri, the mildly reliable virtual assistant for that phone, came along a few years later. You could converse in a limited way with Siri, whose default female-coded voice had the timbre and tone of a self-assured middle-aged hotel concierge. She did not laugh; she did not giggle; she did not tell spontaneous jokes, only Easter egg-style gags written into her code by cheeky engineers. Siri was not your friend. She certainly wasn't your girlfriend.
    [...]
    [...] And on his blog post about the news, [the Chief Executive Officer (CEO) of the company OpenAI] wrote, "It feels like A.I. from the movies; and it's still a bit surprising to me that it's real. Getting to human-level response times and expressiveness turns out to be a big change.""

    Comment
    We have explained multiple times that the Personalized Assistant that Learns (PAL) and the Cognitive Assistant that Learns and Organizes (CALO), which we have referenced in the section Intelligent/Cognitive Interface of the webpage Links to Software of the website of OntoLinux (see the OntoLinux Website update of the 4th of May 2007), were taken by the company Apple for its voice assistant Siri (app presentation in February 2010, iOS integration in October 2011), and that this action was the first undeniable proof that the iPhone is indeed a variant of our original and unique, personal, scientifically fictitious, unforeseeable and unexpected, copyrighted sui generis work of art titled Ontoscope (Os), which was created as part of our original and unique, personal, scientifically fictitious, unforeseeable and unexpected, copyrighted sui generis work of art titled Ontologic System (OS). All relevant facts have been proven again and again throughout the years.
    According to this report, our OS with its OB and Os were still viewed as a work of art (artistic fantasy), philosophy, and scientific fantasy or science fiction (expression of idea) in 2013.

    These facts also show once again that

  • on the one hand our Evoos and our OS are original and unique, personal, scientifically fictitious, incredibly visionary and prophetic, unforeseeable and unexpected, copyrighted sui generis works of art, which are completely copyright protected in whole or in part, and
  • on the other hand certain actors are only copying, stealing, editing, modifying, and acting in other unwanted ways, but are not creating and not presenting a new expression of idea in this and other genres, and instead are merely marketing the creations of others, or being more precise, the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S..

    Luckily, the evidence and the laws are crystal clear and protect our rights.

    By the way:

  • The endeavour of certain start-ups is already failing, like the efforts of members of certain Free and Open Source Software (FOSS) foundations with our Kernel-Less operating system (KLos) and Distributed operating system (Dos) variants of Unix, and Linux, and also Windows (see the Comment of the Day of the 2nd of March 2013) (note for example expression of idea, compilation, integration, architecture, components, etc.), also called microServices technologies (mSx), Service-Oriented technologies (SOx), Resource-Oriented technologies (ROx), ServerLess technologies (SLx), Event-Driven Architecture (EDA), Space-Based Architecture (SBA), Cloud-native technologies (Cnx), Grid Computing (GC or GridC), Cluster Computing (CC or ClusterC), SuperComputing (SC or SupC), and so on for better understanding.
  • There is only one OS, which is our OS, and therefore only
    • one OntoBot,
    • one Ontoscope,
    • one OntoLix, and one OntoLinux, and also one OntoWindows,
    • one Ontologic Net, one Ontologic Web, and one Ontologic uniVerse, which collectively are Ontoverse (Ov) and New Reality (NR),
    • one OntoSearch and one OntoFind,
    • one OntoBlender and one OntoBuilder,
    • one and so on,

    but no meta, omni, or whatsoever.

  • There is no monopoly on sounds or noises (e.g. tones, voices).

    18:30 UTC+2
    Clarification #2

    Safe and secure Bionics (e.g. AI, ML, CI, ANN, CAS, etc.) requires at least the same as safe and secure social media requires: It must be at least 3 times larger and its costs must be at least 3 times higher to really provide a safe and secure environment: 1 part for the operation and 2 parts for safety and security. And even this is not enough.
    But it will also require 3 times more electric power.
    The situation remains the same since the presentation of our Evoos in 1999 and our OS in 2006: Our corporation together with our various proposed and designated partnerships are the only entities worldwide that have the legal, technical, and financial capabilities to provide our Bionic, Cybernetic, and Ontonic environment, respectively our original and unique Evoos and OS with our Ontoverse (Ov) and New Reality (NR), with the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies.
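
    A minimal, purely illustrative calculation of the 1:2 sizing rule stated above; the baseline figures are hypothetical assumptions, and only the factor of 3 is taken from the text:

      # Purely hypothetical operational baseline; only the factor of 3 comes from the rule above.
      operation_capacity_units = 1_000   # e.g. accelerator nodes needed for operation alone
      operation_cost = 50.0              # e.g. million U.S. Dollars per year for operation alone
      operation_power_mw = 20.0          # megawatts for operation alone

      factor = 3  # 1 part operation + 2 parts safety and security
      total_capacity_units = factor * operation_capacity_units  # 3,000 units
      total_cost = factor * operation_cost                       # 150.0 million U.S. Dollars per year
      total_power_mw = factor * operation_power_mw               # 60.0 megawatts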


    22.May.2024

    14:34, 15:46, and 24:26 UTC+2
    No monopoly on sounds or noises

    We thought that we would not need to discuss this point when writing this note, but ...
    Publicity rights or right-of-publicity laws protect individuals' likenesses from being stolen or misused and prohibit the unauthorized use of anyone's "name, voice, signature, photograph, or likeness" for the purposes of "advertising or selling, or soliciting purchases of, products, merchandise, goods or services".

    Indeed, we had a trademark case with the domain "michaelschumacher.de" in the F.R.Germany, which eventually was granted to Michael Schumacher on the basis that he is a public figure. But this was only legally possible, because the initial owner of this domain was a private person.

    In the case of a plagiarism and fake of the plagiarism and fake movie titled "Her", we do have 2 professional actresses involved, so that we do not have the situation in which only one of them has a viable right of publicity.
    If 2 persons have eerily similar voices, then the society cannot give one of these 2 persons a monopoly on her, his, or their sounds or noises. That is not the way it works.

    If at all, then the creator of the movie "Her" could claim a copyright infringement. But a protagonist based on Artificial Intelligence (AI) is a genre, which is not protected due to the scènes à faire doctrine or exception clause of the copyright law.
    The

  • voice in the movie "Her" is not a person's identity or public persona,
  • unknowing viewer even gets no proof that the voice in the movie "Her" is truly the voice of the actress Scarlett Johansson,
  • movie "Her" is not a movie of S. Johansson,
  • voice assistant Samantha is not S. Johansson, and
  • plagiarist has not sought a resemblance of the voice of S. Johansson and an attribute of S. Johansson's identity in contrast to the Ford Motor Company and the songs, the voice, and the identity in the case of the singer Bette Midler.

    Is it not?

    Once again: One has to compare individual or personal voiced works with pieces of music or compositions, which means the test has to be done on a higher metalevel of the individual or personal voices, but not on the very basic or elementary level of the sounds or noises.
    But a voice alone is not sufficient to show the required causal link. In a film, we also need the picture or the story, in a song, we also need the vocal nuance, timbre, and cadence, or the lyrics, while in other cases, we need more of this or that evidence.
    If no causal link exists, then no infringement has taken place.

    In music, one has to create a sequence of tones on her, his, or their own and create an own new expression of idea, but not make a copy of a record of this sequence. In other cases, one has to create a synthetic voice or ask another person to record her, his, or their voice to avoid a causal link.

    Eventually, these laws do protect trademarks, copyrights, and publicity rights, but do not grant a monopoly on the genres, the sounds or noises (e.g. tones, voices), and so on. It is like with compiling in pop art, sampling in music, and scene making in other art forms.

    We quote an online encyclopedia about the subject impressionist in entertainment: "An impressionist or a mimic is a performer whose act consists of imitating sounds, voices and mannerisms of celebrities and cartoon characters. The word usually refers to a professional comedian/entertainer who specializes in such performances and has developed a wide repertoire of impressions, including adding to them, often to keep pace with current events. Impressionist performances are a classic casino entertainment genre.
    Someone who imitates one particular person without claiming a wide range, such as a lookalike, is instead called an impersonator. [...]
    [...]
    Usually the most "impressive" aspect of the performance is the vocal fidelity to the target - usually a politician or a famous person. Props may also be employed, such as glasses or hats, but these are now considered somewhat old-fashioned and cumbersome: the voice is expected to carry the act.
    Because animated cartoons often lampoon famous people (sometimes obliquely), a facility for impressions is one of the marks of a successful voice actor. Many cartoon characters are intended to be recognized by the audience as evoking a specific celebrity, even when not explicitly named. With such indirect references, the entertainment value does not lie so much in the technical achievement of exactly reproducing the voice so much as in merely making it recognizable; the joke lies in the reference to a celebrity, not in its rendition."

    Comment
    And last but not least, C.S. is allowed a lot within the scope of the artistic freedom in general and in relation to the performance and reproduction of the original and unique expression of idea and work of art titled Ontologic System, but nobody else, in particular.

    There is no need for an endless debate, because the legal situation is absolutely crystal clear.

    See the related messages, notes, explanations, clarifications, investigations, and claims, specifically

  • Bette Midler, Tom Waits, Lynn Goldsmith vs. C.S. of the 11th of August 2023,
  • Do not confuse creative and commercial aspects of the 7th of April 2024,
  • SOPR considering action due to ELVIS Act of the 1st of May 2024,
  • Clarification #1 of the 21st of May 2024 (yesterday),
  • Comment of the Day of the 21st of May 2024 (yesterday),

    and the other publications cited therein.

    This is progress.
    Welcome to the Ontoverse (Ov).


    23.May.2024

    02:04 and 14:00 UTC+2
    All deals with OpenAI and Co. are void

    We would like to recall once again that in the legal scope of ... the Ontoverse (Ov), no other entity than our Society for Ontological Performance and Reproduction (SOPR) is allowed by the

  • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
  • rights and properties of C.S. and our corporation, and
  • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) and Main Contract Model (MCM) of our Society for Ontological Performance and Reproduction (SOPR),

    to make partnerships, agreements, deals, and so on, and therefore any conclusion of such a partnership, agreement, deal, and so on is void and must be renegotiated, if reasonable and required at all.

    Eventually, no entity will substitute our SOPR, take our control, and make the decisions for us, because no other entity is authorized with the consent and on behalf of C.S. to represent the rights and properties of C.S. and our corporation.

    We also would like to remind every entity concerned that the ToS demands the unrestricted access to raw signals and data handled with or in our Ontologic System (OS) or with our Ontoscope (Os), which for example comprise all raw signals and data generated since the existence of the Os variants Apple iPhone, Android smartphone, and so on.

    By the way:

  • Why are certain companies still financing start-ups in the field of Bionics (e.g. AI, ML, CI, ANN, CAS, etc.) and misusing them as proxies, even though our SOPR has shown that their
    • technologies, goods, and services are plagiarisms and fakes,
    • technologies, goods, and services belong to the exclusive and mandatory infrastructures of our SOPR and our other Societies,
    • technologies, goods, and services have been blacklisted by our SOPR,
    • actions infringe the exclusive rights and properties of C.S. and our corporation,

    and our SOPR has explicitly demanded in various ways that they no longer get support and services from investors and service providers?

    02:22 UTC+2
    Sony Music is wrong

    We would like to recall once again that there is absolutely no question that our Evolutionary operating system (Evoos) with its Evolutionary operating system Architecture (EosA) and coherent Ontologic Model (OM) and our Ontologic System (OS) with its Ontologic System Architecture (OSA) and Ontologic System Components (OSC), including our Ontologic roBot (OB or OntoBot), were at the times of their individual creations and still are original and unique, personal, scientifically fictitious, incredibly visionary and prophetic, unforeseeable and unexpected, copyrighted sui generis works of art, which means that C.S. does have the right to use the songs of Sony Music and all works of every other author, publisher, etc. to model, train, configure, shape, teach, etc. the related part of our OM and our OB, because they constitute transformative and new expressions of idea and therefore enjoy the fair use doctrine or exception clause of the copyright law.
    The reason behind this fair use doctrine is that every work of art builds on one or more older works of art and the copyright cannot be a monopoly, which can block any further creation, in contrast to a patent, which can block any further invention, though exceptions exist in case of a patent as well.
    It is just the little 1×1 or einmaleins==basics==multiplication tables of copyright. Period. :)

    Welcome to the Ontoverse (Ov).

    15:45 UTC+2
    Hoffman and Greylock AI start-ups blacklisted


    24.May.2024

    13:42, 14:21, and 16:42 UTC+2
    OntoCoin, OntoTaler, Qoin official currencies in Ov in U.S.A., F.R.G., and F.R.

    We are very pleased to share our decision that our OntoCoin, OntoTaler, and Quantum Coin©™ (Qoin©™) are now the official currencies in the legal scope of ... the Ontoverse (Ov) in the United States of America (U.S.A.), the Federal Republic of Germany (F.R.G.), and the French Republic (F.R.).
    We will also take the same actions in other countries, unions of states, and economic zones, if required by the

  • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
  • rights and properties of C.S. and our corporation, and
  • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) and Main Contract Model (MCM) of our Society for Ontological Performance and Reproduction (SOPR),

    which are already the compromise.

    In this regard, we also would like to recall that our

  • SOPR does not hook underlying illegal blockchains into the Universal Ledger (UL) and does not provide Ontologic Applications and Ontologic Services (OAOS) of the exclusive and mandatory infrastructures of our SOPR and our other Societies with their sets of foundational and essential facilities, technologies, goods, and services,
  • Ontologic Bank (OntoBank) with our Ontologic Exchange (OntoEx) will not exchange, trade, transfer, or provide any other services in relation to illegal cryptocurrencies, such as Bitcoin, Ether, Dogecoin, and Co., and financial products based on it, and
  • SOPR has further measures available and will take legal actions against all responsible entities to enforce the rights and properties of C.S. and our corporation,

    as have been discussed and announced multiple times.

    Please, also note that our SOPR does not have to provide essential facilities, technologies, goods, and services, if such an activity interferes with, and also obstructs, undermines, or harms the exclusive moral rights respectively Lanham (Trademark) rights of C.S. and our corporation, which is obviously the case with everything related to the illegal cryptocurrencies and the so-called Decentralized Web (DWeb). :)

    By the way:

  • We have the impression that the approvals of certain financial products based on illegal cryptocurrencies in the U.S.America are politically motivated despite better knowledge about the true legal situation.
    Our SOPR is already working on the measures.
  • The shiba inu Kabosu has died today.

    14:04 and 19:11 UTC+2
    % + %, % OAOS, % HW #11

    Our Society for Ontological Performance and Reproduction (SOPR) noted that the adaptation of more essential elements of our Evoos and our OS has already been ongoing for some time, because one cannot realize the various facilities, technologies, goods, and services based on our Evolutionary operating system (Evoos) and our Ontologic System (OS) in only a few months.
    But honestly, we have to admit that we have anticipated this development for some years.
    Howsoever, our SOPR adjusted the fees and ratios of the special offer option.

    The ratios of the special fast track damage compensation option or requirement below are developing, are not yet determined, and only indicate the direction of decision making and therefore are non-binding.

    We decided for +% to % + % in case of the ratio in relation to shares of proposed Joint Ventures (JVs), because of the continuation of avoidable damages and control, and the incorporation of new insights of older decisions and actions:
    64% + 36% in case of Microsoft and Nvidia
    60% + 40% in case of others

    We also would like to recall that the fees for HardWare (HW) do not include fees for SoftWare (SW). The amounts of the damage compensations and royalties have to be calculated accordingly.
    +3% to 10% in case of Grid Computing (GC or GridC), Cloud Computing (CC or CloudC), SuperComputing (SC or SupC), Data Center (DC) HW, because of adjusting to profits

    The underlying business paradigm of Bionic, Cybernetic, and Ontonic (BCO) SuperComputing (SC or SupC) (BCOSC) and SuperNetworking (SN or SupN) (BCOSN), including Artificial Intelligence (AI) SuperComputing (SC or SupC) (AISC), Machine Learning (ML) SuperComputing (SC or SupC) (MLSC), Computational Intelligence (CI) SuperComputing (SC or SupC) (CISC), Artificial Neural Network (ANN) SuperComputing (SC or SupC) (ANNSC), etc., includes

  • foundational layer - large generative models,
  • middle layer - specialized or domain specific Bionic engines, including AI engines, and
  • application and service layer - Bionic-based enhancing software tools,

    and is based on our Evoos and our OS, and our Ontologic Computing (OC) and Ontologic Networking (ON) paradigms (exception proves the rule; a minimal sketch of the software paradigm follows after the lists below), including the integration of both the

  • software paradigm
    • 1. pre-training, unsupervised learning of Artificial Neural Network (ANN), including
      • Recurrent Neural Network (RNN),
      • transformer, reformer, general transducer (e.g. perceiver),
      • Receptance Weighted Key Value (RWKV) (linear transformer without self-attention, but recurrent attention mechanism),
      • Retentive Networks (RetNet) (linear transformer and RNN without softmax function),
      • etc.,
    • 2. fine-tuning,
    • 3. pretrained coherent Ontologic Model (OM) (e.g. Foundation Model (FM), Foundational Model (FM), Capability and Operational Model (COM), MultiModal (MM) Artificial Neural Network (ANN) Model (MMANNM), Large Language Model (LLM), transformative, generative, creative Artificial Intelligence Model),
    • 4. aligning, (active) learning, instructing, and grounding, and also validating and verifying
      • Reflection (Self-Play), Multi-Agent Reinforcement Learning (MARL),
      • feedback, Human-in-the-Loop (HiL), Reinforcement Learning from Human Feedback (RLHF), Reinforcement Learning from Artificial Intelligence Feedback (RLAIF),
      • etc.,
    • 5. Ontologic roBot (OntoBot),
    • 6. (active) learning and also validating and verifying (e.g. fact checking, hallucination mitigating) on-the-fly,

    and

  • hardware paradigm
    • pre-training, unsupervised learning,
    • BCOSC,
      • AISC,
      • MLSC,
      • ANNSC,
      • etc.,
    • Graphics Processing Unit (GPU),
    • Intelligence Processing Unit (IPU),
      • Tensor Processing Unit (TPU),
      • Vision Processing Unit (VPU),
      • Neural Processing Unit (NPU),
    • other Accelerated Processing Unit (APU).
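
    A minimal, hedged sketch of the software paradigm listed above; the stage names follow the numbered list, while every function body is a hypothetical placeholder and not the actual implementation of our coherent OM or our OntoBot:

      # Minimal sketch of the staged software paradigm listed above; every function is a
      # hypothetical placeholder, not the actual OntoBot or coherent OM implementation.

      def pretrain(corpus):
          """1. Pre-training: unsupervised learning of an ANN (e.g. transformer-style) on raw data."""
          return {"weights": "base", "seen_items": len(corpus)}

      def finetune(model, task_data):
          """2. Fine-tuning: adapt the pretrained model to narrower tasks or domains."""
          model["weights"] = "finetuned"
          model["tasks"] = len(task_data)
          return model

      def align(model, human_feedback):
          """4. Aligning: preference optimization, e.g. RLHF/RLAIF-style feedback loops."""
          model["aligned"] = bool(human_feedback)
          return model

      def validate(answer):
          """Placeholder for on-the-fly validation and verification (e.g. hallucination mitigation)."""
          return len(answer) > 0

      def serve(model, prompt):
          """5./6. Serving with on-the-fly validation hooked in."""
          answer = f"[{model['weights']}|aligned={model.get('aligned', False)}] reply to: {prompt}"
          return answer if validate(answer) else "answer withheld after validation"

      corpus = ["raw text", "images-as-tokens", "audio-as-tokens"]
      model = align(finetune(pretrain(corpus), task_data=["Q/A pairs"]), human_feedback=["ratings"])
      print(serve(model, "What is stage 3?"))  # stage 3 of the list is the pretrained model itself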

    +10% to 17% in case of GC, CC, SC, DC HW for BCO of a Joint Venture (JV)
    +20% to 27% in case of GC, CC, SC, DC HW for BCO, because of adjusting to profits, which are much higher (in at least one case above 25%, over 36%, and up to 48%), and a fair share of the overall costs of the damages, the infrastructures, and the Ontoverse (Ov)
    But this decision might change in individual cases depending on 2 aspects (a minimal illustrative calculation follows after the two aspects below):

  • The one aspect is the ratio of the share distribution, if a JV will be established.
  • The other aspect is the share of the HW in the overall revenue and profit results.
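
    A minimal, purely illustrative calculation with the ratios and surcharge ranges given above; the revenue figure and the selected case are hypothetical assumptions, and, as stated, the fees and ratios themselves remain non-binding:

      # Illustrative only: the HW revenue figure is a hypothetical assumption, and the ratios
      # and surcharge ranges above are, as stated, non-binding and still developing.
      hw_revenue = 100.0  # hypothetical annual HW revenue of a licensee, in billion U.S. Dollars

      # JV share ratios stated above (our side + counterparty side).
      jv_ratio = {"Microsoft, Nvidia": (0.64, 0.36), "others": (0.60, 0.40)}

      # HW royalty surcharges stated above, on top of the base HW fee (not included here).
      surcharge = {
          "GC, CC, SC, DC HW (general)": (0.03, 0.10),
          "GC, CC, SC, DC HW for BCO of a JV": (0.10, 0.17),
          "GC, CC, SC, DC HW for BCO (no JV)": (0.20, 0.27),
      }

      case = "GC, CC, SC, DC HW for BCO (no JV)"
      low, high = surcharge[case]
      print(f"{case}: {hw_revenue * low:.1f} to {hw_revenue * high:.1f} billion U.S. Dollars")
      print("JV shares for others:", jv_ratio["others"])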

    Ontologic Applications and Ontologic Services (OAOS)
    Bionic, Cybernetic, and Ontonic Computing (BCOC) and Networking
    Grid Computing (GC or GridC) and Networking
    Cluster Computing (CC or ClusterC) and Networking
    Cloud Computing (CC or CloudC) and Networking
    SuperComputing (SC or SupC) and Networking
    Data Center (DC)
    HardWare (HW)

    19:02 and 20:22 UTC+2
    Microsoft has to incorporate OpenAI

    But definitely not valued at 80 billion U.S. Dollars, but more in the DeepMind range.

    With the latest version of our MultiModal Artificial Neural Network Model (MMANNM), the company OpenAI has no legal business model anymore. In fact, only a purely connectionist respectively ANN-based, and text-based chatbot executed by an ordinary Command Line Interface (CLI), voice-based system, or in a web browser, or on an old computer system without a causal link to our Evolutionary operating system (Evoos) and our Ontologic System (OS) with its Ontoscope (Os) is legal, while the partnership of the companies Microsoft and OpenAI was never legal.

    See also the note OpenAI is just many things it should not be of the 6th of March 2024.

    We also would like to recall that the subsymbolic, connectionist, probabilistic, and statistic brute force approach (e.g. Machine Learning (ML) and Artificial Neural Network (ANN)) does not work. But we will not explain the reason publicly.


    25.May.2024

    00:00 UTC+2
    Autorité de la Concurrence is not quite right

    We quote a report, which is about the Directive on Copyright in the Digital Single Market, the market regulator of the French Republic (F.R.), and the company Alphabet (Google) and was publicized on the 20th of March 2024: "Google hit with $270M fine in France as authority finds news publishers' data was used for Gemini
    [...]
    The competition authority has found fault with Google for failing to notify news publishers of this GenAI use of their copyrighted content. This is in light of earlier commitments Google made which are aimed at ensuring it undertakes fair payment talks with publishers over reuse of their content.

    Copyright and competition wrongs
    In 2019, the European Union passed a pan-EU digital copyright reform that extended copyright protections to news headlines and snippets. News aggregators, such as Google News, Discover and the "Top Stories" feature box on search results pages, had previously scraped and displayed these news stories on their products without any financial compensation.
    Google originally sought to evade the law by switching off Google News in France. But the competition authority quickly stepped in - finding its unilateral action an abuse of a dominant market position that risked harm to publishers. The intervention essentially forced Google to cut deals with local publishers over content reuse. [...]
    The tech giant called the sanction "disproportionate" and said it would appeal. But it subsequently sought to settle the dispute - offering a series of pledges and withdrawing its appeal. The commitments were accepted by the French Autorité, include passing key information to publishers and negotiating in a fair way.
    Google has signed copyright agreements with hundreds of publishers in France - which fall under the remit of its agreement with the Autorité. So its business in this area is very tightly regulated.

    No appeal
    Google has agreed not to contest the Autorité's latest findings - in exchange for a fast-tracked process and making a monetary payment.
    [...]
    With generative AI in the frame, and the competitive scramble to launch tools, Google's calculus on approaching the content reuse issue looks different.

    GenAI training in the frame
    Today's enforcement by France's competition authority shows it honed in on Google's use of content from news publishers and agencies for training purposes for its AI foundation model [and "generative AI tool",] and its related AI chatbot service Bard (now called Gemini).
    It found Google used content from publishers and press agencies for training Bard [...] "without notifying the copyright holders or the Authority," per its press release.
    On this point, Google's defense is twofold. In its blog post it writes that the competition authority "does not challenge the way web content is used to improve newer products like generative AI, which is already addressed in Article 4 of the EUCD" [[European Union] Copyright Directive].
    However in its press release the Autorité argues it has not yet been determined whether the exemption applies here. (It's worth noting the relevant clause refers to "lawfully accessible works" - while Google is under a legally binding commitment to the competition authority to notify copyright holders about uses of their protected works and apparently failed to do so in this case.)
    "When it comes to declaring whether using news content to train an artificial intelligence service falls under neighboring rights and protection, this question has not been answered just yet," the competition authority wrote. "However, the Autorité considers that Google has breached its commitment #1 by failing to inform publishers that their content had been used to train Bard."
    Google's blog post also makes passing mention of the EU AI Act - suggesting it's of relevance. However the legislation is not yet in force as it's pending final adoption by the European Council.
    The incoming AI legislation will also say developers must abide by the bloc's copyright rules. And it introduces transparency requirements with that goal in mind - requiring them to put in place a policy to respect EU copyright law; and make publicly available a "sufficiently detailed summary" of the content used for training general purpose AI models (such as Gemini/Bard).
    This incoming requirement on model makers to publish a training data summary may, in the future, make it easier for news publishers whose protected content has been ingested for GenAI training to obtain fair remuneration under EU copyright law.

    No technical opt out
    The Autorité also points out that Google failed to provide, until at least September 28, 2023, a technical solution to allow publishers and press agencies to opt out of their content being used to train Bard without such a decision affecting the display of their content on other Google services.
    ""Until this date, publishers and news agencies that wanted to opt out of this use case had to insert an instruction that blocks all content indexation from Google, including for Search, Discover and Google News services. Those services are specifically part of the negotiation for revenue related to neighboring rights," it wrote, adding: "In the future, the Autorité will carefully look at the effectiveness of Google's opt-out processes."
    [...]

    Other shortcomings
    The Autorité is also sanctioning Google for a raft of other issues related to how it negotiates with French news publishers, finding it failed to provide them with all the information needed to ensure fair bargaining of remuneration for their content.
    In its press release, it wrote that Google's information to publishers about its methodology for calculating how much they should be paid was "particularly opaque."
    It also found Google failed to meet non-discrimination criteria, aimed at ensuring publishers get equal treatment. And it called out a decision by Google to impose a "minimum threshold" for remuneration - i.e. below which it would not make any pay-outs to publishers - with the Autorité describing this as introducing discrimination between publishers "in its very principle". Below a certain threshold all publishers are "arbitrarily allocated zero remuneration, regardless of their respective situation", its press release also noted.
    Additionally, the Autorité found fault with Google's calculations regarding so-called "indirect income", saying the "package" it proposed was not in accordance with previous decisions or the appeal judgment of the Court of Justice, from October 2020.
    It also said Google failed to act on its commitment to update remuneration contracts in line with its pledges."

    Comment
    We are observing the general development with great interest.

    According to an online encyclopedia, the "Article 4 introduces a copyright exception for text and data mining (TDM) for the purposes of scientific research."
    But Google uses its partial Ontologic roBot (OntoBot) variant Bard/Gemini for commercial purposes.
    Also note that the arts are not mentioned at all for very good reasons.

    The agreement between Google and the French Autorité is only about verbatim copies of news contents, but not about transformative and new expressions of ideas, for which content from publishers and press agencies can be used under the fair use doctrine of the copyright law.

    But Google could not appeal, because the company is not the holder of the copyright for the related essential parts of our Evoos, our coherent Ontologic Model (OM), our Ontologic roBot (OntoBot), and our transformative, generative, and creative Bionics, specifically our Foundation Model (FM), Foundational Model (FM), Capability and Operational Model (COM), and MultiModal Artificial Neural Network Model (MMANNM).

    We also repeat once again that C.S.

  • does not have to make publicly available a "sufficiently detailed summary" of the content used for training our general purpose AI models and our OntoBot,
  • does not have to make fair remuneration to news publishers,
  • does not have to provide a technical opt out, quite contrary, and
  • does not have to do this and that in relation to the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S.,

    and our Society for Ontological Performance and Reproduction (SOPR) will enforce all basic rights and exclusive rights and properties of C.S. and our corporation at the courts worldwide.

    For sure, publishers and copyright holders have the exclusive right to license their works without restrictions. But licensees are only allowed to use said licensed work for a

  • scientific research in general, and
  • purely connectionist respectively ANN-based, and text-based chatbot executed by an ordinary Command Line Interface (CLI), voice-based system, or in a web browser, or on an old computer system (a minimal illustrative sketch follows after this list) without a causal link to our Evolutionary operating system (Evoos) and our Ontologic System (OS) with its
    • Ontologic System Architecture (OSA) and
    • Ontologic System Components (OSC), including
      • Ontologic roBot (OntoBot),
      • Ontologic Scope (OntoScope), and
      • Ontologic Search (OntoSearch) and Ontologic Find (OntoFind),

      and

    • Ontoscope (Os)

    in particular integrated with the fields of

    • ontology,
    • formal modeling,
    • formal verification and validation,
    • Information Retrieval (IR) System (IRS), including Information Filtering (IF) System (IFS), including Recommendation System or Recommender System (RecS), and also Search System (SS) or Search Engine (SE), and Question Answering (QA) System (QAS),
    • Knowledge Representation and Reasoning (KRR), including Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), Semantic Network (SN), Ontologic Model (OM), etc.,
    • Knowledge Retrieval (KR) System (KRS), including Semantic Search Engine (SSE) System (SSES),
    • Information System (IS), including Knowledge Management (KM) System (KMS),
    • Expert System (ES),
    • chatbot,
    • Computational Linguistics (CL), including Speech Processing (SP), including speech recognition and speech synthesis,
    • voice user interface,
    • Dialog System (DS or DiaS), including Dialogue Management System (DMS),
    • Conversational System (CS or ConS), including Conversational Agent System (CAS or ConAS),
    • multimodal system, including Multimodal User Interface (MUI),
    • Intelligent Agent System (IAS), including (voice-based or speech controlled) virtual assistant, Intelligent Personal Assistant (IPA),
    • Robotic System (RS),
    • and so on.
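
    As a minimal, hedged sketch of what such a plain, text-based chatbot executed by an ordinary Command Line Interface (CLI) could look like: a trivial keyword matcher stands in here for whatever purely text-based or ANN-based model drives the replies, and there is deliberately no ontology, no KRR, no agent system, no speech processing, and none of the other integrations listed above; all names and canned replies are hypothetical:

      # Minimal keyword-matching CLI chatbot; the canned replies are hypothetical, and there is
      # deliberately no ontology, no KRR, no agents, no speech processing, and no multimodality.
      import sys

      REPLIES = {
          "hello": "Hello.",
          "bye": "Goodbye.",
      }

      def reply(line: str) -> str:
          for keyword, text in REPLIES.items():
              if keyword in line.lower():
                  return text
          return "I do not understand."

      if __name__ == "__main__":
          # Ordinary command line interface: read lines from stdin, write replies to stdout.
          for line in sys.stdin:
              print(reply(line))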

    But the exclusion doctrines of the copyright are effective as well, specifically in relation to a transformative and new expression of idea.
    Eventually, the goalposts are not being moved here, especially not because we are in the right.

    See also the notes

  • No monopoly on sounds or noises of the 22nd of May 2024,
  • Sony Music is wrong of the 23rd of May 2024 (yesterday),

    and the other publications cited therein.


    27.May.2024

    07:32, 12:05, and 16:57 UTC+2
    Luka blacklisted

    We quote an online encyclopedia about the subject Replika of the company Luka: "Replika is a generative AI chatbot app released in November 2017.[1 [The Emotional Chatbots Are Here to Probe Our Feelings. [31st of January 2018]]] The chatbot is trained by having the user answer a series of questions to create a specific neural network.[2 [This app is trying to replicate you. [29th of August 2019]]] The chatbot operates on a freemium pricing strategy, with roughly 25% of its user base paying an annual subscription fee.[1]
    [...]

    History
    Eugenia Kuyda established Replika while working at Luka, a tech company she had co-founded at the startup accelerator Y Combinator around 2012.[3][4] Luka's primary product was a chatbot that made restaurant recommendations.[3] According to Kuyda's origin story for Replika, a friend of hers died in 2015 and she converted that person's text messages into a chatbot.[5] According to Kuyda's story, that chatbot helped her remember the conversations that they had together, and eventually became Replika.[3]
    Replika became available to the public in November 2017.[1] [...]
    [...]"

    Comment
    The application is based on

  • Information Retrieval (IR) System (IRS), including Information Filtering (IF) System (IFS), including Recommendation System or Recommender System (RecS), and also Search System (SS) or Search Engine (SE), and Question Answering (QA) System (QAS),
  • chatbot,
  • speech recognition,
  • voice user interface,
  • Dialog System (DS or DiaS), including Dialogue Management System (DMS),
  • Conversational System (CS or ConS), including Conversational Agent System (CAS or ConAS),
  • multimodal system, including Multimodal User Interface (MUI),
  • Multimedia User Interface (MUI),
  • "3D computer graphics software tool set used for creating animated films, visual effects, art, 3D-printed models, motion graphics, interactive 3D applications, virtual reality, and [...] video games [...] features include 3D modelling", [Blender],
  • avatar, including 3D characters, anime characters, minifigures, etc.,
  • anime-based Graphical User Interface (GUI),
  • Intelligent Agent System (IAS), including (voice-based or speech controlled) virtual assistant, Intelligent Personal Assistant (IPA),
  • Cognition and Affect (CogAff) Architecture (CogAffA) for Cognitive and Affective Computing (CogAffC) (e.g. Emotive Computing (EmoC) and Affective Computing (AffC)), and
  • parallel analogy (panalogy) architecture (e.g. Emotion Machine architecture),
  • coherent Ontologic Model (OM),
  • transformative, generative, and creative Bionics,
  • cybernetic self-reflection, self-portrait, or self-image, and
  • user reflection and learning the habits of a user,

    and therefore a partial plagiarism and fake of our Ontologic roBot (OntoBot), obviously, and thus a copyright infringement due to our expression of idea, compilation, integration, architecture, etc., and also exclusive moral rights respectively Lanham (Trademark) rights (e.g. exploitation (e.g. commercialization (e.g. monetization))). What else?

    The connection to the start-up accelerator Y Combinator is no surprise for us anymore.

    Finally, we note once again that morality is a foreign word in this company and in the state of California.

    17:18 UTC+2
    xAI has no rights and properties, and is blacklisted

    Do not fall prey to that Ponzi scheme once again. As in the cases of the companies Microsoft and OpenAI, Alphabet (Google), Amazon and Anthropic, Meta (Facebook), Nvidia, Alibaba, Tencent, and Co., the company X.AI Corporation has no moral rights respectively Lanham (Trademark) rights and no copyrights at all, specifically for

  • our expressions of ideas, compilations, integrations, architectures, components, etc. in general and
  • our coherent Ontologic Model (OM), our Ontologic roBot (OntoBot), and our transformative, generative, and creative Bionics, specifically our Foundation Model (FM), Foundational Model (FM), Capability and Operational Model (COM), and MultiModal Artificial Neural Network Model (MMANNM), our Ontologic Net (ON) with our Interconnected supercomputer (Intersup), our Ontoverse (Ov) and New Reality (NR), and so on in particular,

    in total contrast to C.S. and our corporation with our collecting Society for Ontological Performance and Reproduction (SOPR) with its exclusive infrastructure.

    But the fact is that the latest variant of the partial plagiarism and fake of our OntoBot is also already able to process textual and visual information and is therefore multimodal and no ordinary text-based chatbot anymore, even if both capabilities have been presented as separate development stages.

    Eventually, bad actors and their supporters will bite on granite and potentially end in jail for a very long term. In fact, we already mentioned that this attempt to steal our OntoBot and other essential parts of our Evoos and our OS could be the final move of several bad actors.
    In this case of xAI, we do see once again wire fraud, and conspiracy and plot in general and another hidden Ponzi scheme in particular, as well as infringements of the rights and properties of C.S. and our corporation.

    See the related messages, notes, explanations, clarifications, investigations, and claims, specifically

  • There is only one OS and Ov #3 of the 9th of May 2024,
  • Clarification #2 of the 21st of May 2024,

    and the other publications cited therein.

    By the way:

  • That fraudulent and even serious criminal company is only valued at x billion U.S. Dollars on the basis of already made investments. But it does not have this sum as working capital and we guess that the second financing round is only sufficient to finance its operation, specifically the training of ANN and the conduction of fraudulent and even serious criminal actions, for the next few months, as can be easily seen with similar fraudulent and even serious criminal activities in this field.
    But they all need so much more capital in the range of several hundred billion or even a few trillion U.S. Dollar, for sure without having serious competition, while the first better known start-ups have already given up, sold a lot of shares risking control over their businesses to avoid insolvency, and companies like Meta (Facebook) have to calculate twice about the costs and the existential legal consequences.
    And they all have to comply with the
    • national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
    • rights and properties of C.S. and our corporation, and
    • Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) and Main Contract Model (MCM) of our Society for Ontological Performance and Reproduction (SOPR),

    and utilize the exclusive infrastructures of our SOPR and our other Societies with their set of foundational and essential facilities, technologies, goods, and services.

  • Investors of Tesla Motors, Space Exploration Technologies (Space ExiT), and X (Twitter) got just another reason to be more concerned about the focus and distraction of those companies and their investments in them.


    28.May.2024

    18:39 and 20:42 UTC+2
    Amazon has to incorporate Anthropic

    But definitely not valued at 15 billion U.S. Dollars, but more in the DeepMind range.

    Maybe, Amazon does it together with Alphabet (Google), for example as a 50:50 Joint Venture (JV).


    29.May.2024

    04:04 and 20:24 UTC+2
    Clarification

    *** Work in progress - some few comments missing, better order and wording ***
    In the last few years, we already noted that at least 3 main directions of development are taking place in relation to the implementation of more essential parts of our Ontologic System (OS) with its inclusion of our Evolutionary operating system (Evoos) with its coherent Ontologic Model (OM) and its Ontologic roBot (OntoBot), which are the fields of

  • multimodal system,
  • integrated (unified and hybrid) system on the basis of
    • subsymbolic, connectionist, probabilistic, and statistic models (e.g. Machine Learning (ML)), including
      • Artificial Neural Network (ANN),
      • Support Vector Machine (SVM),
      • Markov Model (MM) (e.g. Hidden Markov Model (HMM), Variable-Order Markov Model (VOMM), Markov Chain (MC), and Markov Random Field (MRF)),
      • Probabilistic Graphical Model (PGM) (e.g. Bayesian Network (BN) and MRF),
      • etc.,

      and

    • symbolic, logic, mathematic, and structured or formal models (e.g. Artificial Intelligence (AI)), including
      • Two-Valued Logics (TVL),
      • Many-Valued Logics (MVL),
      • Finite-Valued Logics (FVL),
      • Infinite-Valued Logics (IVL) or Continuous-Valued Logics (CVL),
      • Fuzzy Logic (FL),
      • etc.,

    and also

  • SoftWare Agent-Based System (SWABS) or simply Agent-Based System (ABS) or Agent System (AS or ASys) respectively Agent-Oriented technologies (AOx), including
    • SoftWare Robotic System (SWRS, SoftWare robot, SoftWare bot, or Softbot),
    • Distributed Artificial Intelligence (DAI), Multi-Agent and Cooperative Computing (MACC), or Modeling Autonomous Agents in a Multi-Agent World (MAAMAW), including
      • Multi-Agent System (MAS),
      • MAS agency or Agent Society (AS or ASoc), and
      • Distributed Problem Solving (DPS),
    • Holonic Agent System (HAS),
    • Intelligent Agent System (IAS), and
    • Cognitive Agent System (CAS), and also
    • Agent-Oriented Programming (AOP),

    which is particularly apparent with for example the integrations of

    • subsymbolic and symbolic systems by the
      • unified approach, or connectionist symbol processing system, or subsymbolic Artificial Intelligence (AI) system, or parallel distributed processing system, or distributed connectionist model, or Distributed Artificial Neural Network (DANN) model, or integrated connectionist model, and
      • hybrid approach, or Hybrid Symbolic-Connectionist (HSC) or Hybrid Connectionist-Symbolic (HCS) system (not to be confused with purely connectionist integrations of the unified approach; a toy sketch of this hybrid direction follows after the lists below),

      and

    • unified and hybrid subsymbolic and symbolic systems with Agent-Based Systems (ABSs) by
      • Hybrid Symbolic-Connectionist (HSC) system based on Multi-Agent System (MAS) in 1993,
      • Soft Computing Agent Society (SCAS) in the beginning of 1999, and
      • Evoos and Soft Agent Computing (SAC) around the end of 1999 and in the beginning of 2000,

      (see also the fields of Computational Intelligence (CI), including the fields of

    • Soft Computing (SC or SoftC), including the fields of
      • Fuzzy Logic (FL), later whole Multi-Valued Logics (MVL) (by Evoos),
      • Artificial Neural Network (ANN), later Reflective Neural Network (RNN) or Feedback Neural Network (FNN) (by Evoos),
      • Probabilistic Reasoning (PR), later whole Probabilistic Models (PMs or ProMs),
      • Bayesian Network or Belief Network (BN or BNet), and
      • Genetic Algorithm (GA), later whole Evolutionary Computing (EC) (by Evoos),

      and

    • Swarm Computing (SC or SwarmC) or Swarm Intelligence (SI)),

    and the connections and integrations of these fields with the fields of

  • Quality Management (QM), and iterative and incremental optimization, including Total Quality Management (TQM), Plan-Do-Check-Act (PDCA) process and PDCA multi-loop, etc.,

  • formal system, including formal language, formal grammar, syntax (logic), and logical form, Logic System (LS), etc.,
  • formal modeling,
  • formal verification and validation,

  • reflective system,
  • Ontology-Based technologies (OBx),
  • Ontology-based Object-oriented (OO 1), Ontology-Oriented (OO 2), Ontologic(-Oriented) (OO 3) system,
  • Model-Based technologies (MBx),
  • model-based reflection (e.g. Arrow System (AS)),

  • Information Retrieval (IR) System (IRS), including Information Filtering (IF) System (IFS), including Recommendation System or Recommender System (RecS), and also Search System (SS) or Search Engine (SE), and Question Answering (QA) System (QAS),
  • Knowledge Representation and Reasoning (KRR), including Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), Ontologic Model (OM), Semantic Network (SN), etc.,
  • Knowledge Retrieval (KR) System (KRS), including Semantic Search Engine (SSE) System (SSES),
  • Information System (IS), including Knowledge Management (KM) System (KMS),
  • Expert System (ES),
  • chatbot,

  • speech recognition,
  • voice user interface,
  • Dialog System (DS or DiaS), including Dialogue Management System (DMS),
  • Conversational System (CS or ConS), including Conversational Agent System (CAS or ConAS),

  • Intelligent Agent System (IAS), including (voice-based or speech controlled) virtual assistant, Intelligent Personal Assistant (IPA) or Personal Intelligent Assistant (PIA),

  • Common Sense Computing (CSC),

  • Autonomic technologies (Ax) (e.g. Autonomic Computing (AC)),
  • Resource-Oriented technologies (ROx) (e.g. Resource-Oriented Computing (ROC)),
  • Autonomous System (AS),
  • Robotic Automation technologies (RAx),
  • Service-Oriented technologies (SOx),
  • Robotic System (RS),
  • and so on.

    In the past, we have already discussed the fields of

  • capability, operation, and control,
  • multidomain system,
  • multilingual system,
  • multiparadigmatic system,
  • multimodal system,
  • multimedia system,
  • integrated system,
  • Information Retrieval (IR) System (IRS),
  • Knowledge Representation and Reasoning (KRR), and
  • Knowledge Retrieval (KR) System (KRS)

    multiple times and showed that only the

  • text-based chatbot for a single Domain-Specific Language (DSL), and
  • rudimentary Agent-Based System (ABS)

    are no plagiarisms and fakes of our Evoos with its coherent Ontologic Model (OM) and transformative, generative, and creative Bionics, and our OS with our Ontologic roBot (OntoBot).

    In this clarification we discuss the integration of the fields of logics respectively symbolic systems and connectionist systems respectively subsymbolic systems in (a little) more detail.

    We quote an online encyclopedia about the subject hallucination: "In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation[1] or delusion[2]) is a response generated by AI which contains false or misleading information presented as fact.[3][4 [On Faithfulness and Factuality in Abstractive Summarization. [2020]]][5] [...]
    For example, a chatbot powered by large language models (LLMs) [...] may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses. Detecting and mitigating these hallucinations pose significant challenges for practical deployment and reliability of LLMs in real-world scenarios.[6][7][8] [...]

    [...]

    Mitigation methods
    The hallucination phenomenon is still not completely understood.[5] Therefore, there is still ongoing research to try to mitigate its occurrence.[52 [A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation. [July 2019]]] Particularly, it was shown that language models not only hallucinate but also amplify hallucinations, even for those which were designed to alleviate this issue.[53 [On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? [July 2022]]] Researchers have proposed a variety of mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer.[54 [ChatGPT 'hallucinates.' Some researchers worry it isn't fixable. [30th of May 2023]]] Another approach proposes to actively validate the correctness corresponding to the low-confidence generation of the model using web search results.[55 [A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation. [12th of August 2023]]] Nvidia Guardrails, launched in 2023, can be configured to block LLM responses that do not pass fact-checking from a second LLM.[56 [Nvidia has a new way to prevent A.I. chatbots from 'hallucinating' wrong facts. [25th of April 2023]]] Furthermore, numerous tools like SelfCheckGPT[57] and Aimon[58] have emerged to aid in the detection of hallucination in offline experimentation and real-time production scenarios."

    Comment
    First of all, we note that it is both the datasets and the models.
    We also note that hallucination is fixable, at least in theory (see also for example the document titled "Symbol Processing Systems, Connectionist Networks, and Generalized Connectionist Networks" and publicized in December 1990), but only with extensive work and as integrated systems, like the ones discussed in this clarification (see for example the document titled "Integrated Connectionist Models: Building AI Systems on Subsymbolic Foundations" and publicized in October 1994), because the problem is system immanent respectively inherent in subsymbolic systems and connectionist networks (e.g. Artificial Neural Network (ANN)).

    To increase factuality, and precision or accuracy, and also actuality of recall and retrieval, and to detect and mitigate hallucination, one can also see once again the

  • utilization of integrated systems, specifically unified and hybrid subsymbolic and symbolic systems:
    • Neural logic or Neural symbolic system, utilization of ANNs as tools in logic models,
    • Logic neural or symbolic neural network, rationalization of conventional ANN models, and
    • Logic-neural hybrid, incorporation of logic technologies and ANNs into hybrid systems, and also
    • SoftWare Agent-Based System (SWABS),

    as already mentioned in the introductory section and also the

  • utilization of and the integration with
    • reflective system,
    • Information Retrieval (IR) by the
      • unified approach, or direct integration with an SES, or
      • hybrid approach, or indirect integration with (a form of) Information Retrieval (IR), such as for example the capability to browse or run a web search on the same subject and then fold the one or more search results into the query before sending it on to a SoftWare Agent-Based System (SWABS), including SoftWare Robotic System (SWRS, SoftWare robot, SoftWare bot, or Softbot) and chatbot (see the sketch after this list), and
    • Knowledge Retrieval (KR).
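
    For illustration only, the following minimal sketch (in Python) shows the hybrid approach of this indirect integration with Information Retrieval (IR), in which the results of a web search are folded into the query before the query is sent on to a chatbot or another SWABS. The callables web_search and chatbot are merely assumed placeholders of this sketch and not the interfaces of any specific implementation.

    # Minimal sketch (assumption): fold web search results into the query
    # before sending it on to a chatbot, as a form of indirect IR integration.

    def fold_search_results_into_query(query, search_results, max_snippets=3):
        # Keep only the first few snippets to stay within the context limit.
        context = "\n".join("- " + snippet for snippet in search_results[:max_snippets])
        return ("Use the following search results as context and answer factually.\n"
                "Search results:\n" + context + "\n\n"
                "Question: " + query)

    def answer_with_retrieval(query, web_search, chatbot):
        # web_search(query) -> list of text snippets (assumed placeholder)
        # chatbot(prompt)   -> answer text           (assumed placeholder)
        results = web_search(query)
        folded_query = fold_search_results_into_query(query, results)
        return chatbot(folded_query)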

    For example, the approaches of the

  • document titled "Harnessing Deep Neural Networks with Logic Rules" and publicized in March 2019 and
  • Guardrails of the company Nvidia presented in April 2023

    are nothing else than some kind of reflection and Multi-Agent System (MAS).
    See also the predictive coding theory of brain function (also known as predictive processing) and the field of Cognitive Agent System (CAS).
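
    For illustration only, the following minimal sketch shows such a reflective check in the style of a two-agent Multi-Agent System (MAS) or guardrail, in which a second model validates the answer of a first model before it is returned; the callables generator and checker are merely assumed placeholders of this sketch and not the implementation of Nvidia Guardrails or of any other product.

    # Minimal sketch (assumption): a reflective, two-agent style check in which a
    # second model validates and, if necessary, criticizes the answer of a first model.

    def guarded_answer(question, generator, checker, max_revisions=2):
        # generator(prompt)          -> candidate answer         (assumed placeholder)
        # checker(question, answer)  -> (is_supported, feedback)  (assumed placeholder)
        answer = generator(question)
        for _ in range(max_revisions):
            supported, feedback = checker(question, answer)
            if supported:
                return answer
            # Reflection step: feed the criticism back and ask for a revised answer.
            answer = generator(question + "\nRevise this answer: " + answer +
                               "\nIssue found by the checker: " + feedback)
        return "No validated answer available."  # refuse instead of passing on a hallucination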

    To increase rationality, explainability, and interpretability, and also validity and verifiability, one can also see once again the same utilization of integrated systems, specifically with the logical calculi (see the sketch after this list)

  • MVL,
  • FL,
  • etc..
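
    For illustration only, the following minimal sketch shows continuous truth values in the sense of MVL and FL, using the standard Lukasiewicz operators on the interval [0, 1]; the example values are merely assumed for this sketch.

    # Minimal sketch: Lukasiewicz-style many-valued operators on [0, 1], so that a
    # statement can be graded instead of being only true (1.0) or false (0.0).

    def l_not(a):
        return 1.0 - a                      # negation

    def l_and(a, b):
        return max(0.0, a + b - 1.0)        # Lukasiewicz t-norm (conjunction)

    def l_or(a, b):
        return min(1.0, a + b)              # Lukasiewicz t-conorm (disjunction)

    def l_implies(a, b):
        return min(1.0, 1.0 - a + b)        # Lukasiewicz implication

    # Example: a claim supported with degree 0.8 and contradicted with degree 0.3.
    support, contradiction = 0.8, 0.3
    print(l_and(support, l_not(contradiction)))   # graded confidence, here 0.5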

    See also the related and more detailed quotes and comments about the

  • document titled "A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation" and referenced in the text about the subject hallucination quoted above, and also based on our OS, and
  • subjects
    • eXplainable Artificial Intelligence (XAI) or eXplainable Machine Learning (XML), and
    • Reason Maintenance System (RMS), including Truth Maintenance System (TMS),

    quoted below.

    As we always mention at this point, the deficits of the brute force approach are also the reason why we

  • created our OS on the one hand and
  • warned about the brute force approach all the time on the other hand.

    "This is the result of the brute force approach, which is the reason why we prefer a hybrid holonic Immobot architecture with subsymbolic and symbolic foundations and rationality" (see the Investigations::Multimedia, AI and KM of the 5th of August 2022), because "this formal approach outsmarts and outperforms the probabilistic chaos inherent in the brute force approach once again (see the Clarification of the 22nd of July 2023)."
    See also the notes

  • They lost control of the 21st of July 2023 and
  • Success story continues and no end in sight of the 7th of August 2023.

    Eventually, the causal links with our Evoos and our OS are obvious.

    We quote an online encyclopedia about the subject conceptualization in the field of Information Science (IS): "In information science a conceptualization is an abstract simplified view of some selected part of the world, containing the objects, concepts, and other entities that are presumed of interest for some particular purpose and the relationships between them.[2][3] An explicit specification of a conceptualization is an ontology, and it may occur that a conceptualization can be realized by several distinct ontologies.[2] An ontological commitment in describing ontological comparisons is taken to refer to that subset of elements of an ontology shared with all the others.[4][5] "An ontology is language-dependent", its objects and interrelations described within the language it uses, while a conceptualization is always the same, more general, its concepts existing "independently of the language used to describe it".[6 [Formal Ontology in Information Systems. [1998]]] [...]
    Not all workers in knowledge engineering use the term 'conceptualization', but instead refer to the conceptualization itself, or to the ontological commitment of all its realizations, as an overarching ontology.[7]"

    Comment
    If only a single ontology exists as an explicit specification of a conceptualization, then this ontology is language-independent.

    We quote an online encyclopedia about the subject ontology learning: "Ontology learning (ontology extraction, ontology generation, or ontology acquisition) is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.
    Typically, the process starts by extracting terms and concepts or noun phrases from plain text using linguistic processors such as part-of-speech tagging and phrase chunking. Then statistical[1] or symbolic[2][3] techniques are used to extract relation signatures, often based on pattern-based[4] or definition-based[5] hypernym extraction techniques.

    Procedure
    Ontology learning (OL) is used to (semi-)automatically extract whole ontologies from natural language text.[6][7] [...]"

    Comment
    object, concept, entity, domain term, language, ontology
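
    For illustration only, the following minimal sketch shows the pattern-based hypernym extraction step circumscribed in the quote, using a single "such as" pattern; the function name and the regular expressions are merely our own assumptions for this sketch and not taken from any specific ontology learning tool.

    # Minimal sketch (assumption): crude Hearst-style pattern
    # "<hypernym> such as <hyponym>, <hyponym> and <hyponym>".
    import re

    def extract_hypernyms(sentence):
        match = re.search(r"(.+?)\s+such as\s+(.+)", sentence)
        if not match:
            return []
        hypernym = match.group(1).strip().lower()
        hyponyms = re.split(r",\s*|\s+and\s+", match.group(2).rstrip(". "))
        # (term, broader concept) pairs as raw material for a draft ontology
        return [(h.strip().lower(), hypernym) for h in hyponyms if h.strip()]

    print(extract_hypernyms("logical calculi such as two-valued logic, many-valued logic and fuzzy logic"))
    # [('two-valued logic', 'logical calculi'), ('many-valued logic', 'logical calculi'), ('fuzzy logic', 'logical calculi')]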

    We quote the document titled "A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation" and publicized on the 12th of August 2023: "[...]
    1 Introduction
    Recently developed large language models [...] have achieved remarkable performance on a wide range of language understanding tasks. Furthermore, they have been shown to possess an impressive ability to generate fluent and coherent text. Despite all these abilities, their tendency to ‘hallucinate' critically hampers their reliability and limits their widespread adoption in real-world applications.
    Hallucination in the context of language models refers to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input [...].
    [...] Specifically, the detection technique achieves a recall of ~ 88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations. Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives. Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3.5 (text-davinci-003) model from 47.5% to 14.5% on average (Figure 1).
    [...]

    2.2 Hallucination Detection
    2.2.1 Identify Key Concepts
    In the first step, we identify the important concepts from the generated sentence. We identify these concepts because validating the correctness of the entire sentence at once is infeasible; this is because a sentence may contain a number of different facets all of which can not be validated at once. On the other hand, individually validating the correctness corresponding to the concepts provides opportunities for accurately detecting hallucinations. Thus, the objective of this step is to identify the candidates of potential hallucination. We note that a concept or keyphrase is essentially a span of text consisting of one or more words. We study the following techniques to identify the concepts:
    Entity Extraction: Entities are usually an important part of a sentence, thus, we use an off-the-shelf entity extraction model to identify the concepts. A limitation of this method is that a concept need not necessarily be an entity and can be a non-entity span also. We address this limitation with a keyword extraction model.
    Keyword Extraction: To also identify the non-entity concepts, we explore an off-the-shelf keyword extraction model¹. This model uses Keyphrase Boundary Infilling with Replacement (KBIR) as its base model and fine-tunes it on the KPCrowd dataset [...].
    *Instructing the Model*: Since state-of-the-art language models perform remarkably well on a wide range of tasks, in this technique, we directly instruct the model to identify the important concepts from the generated sentence. An important characteristic of this technique is that it doesn't require calling a task-specific tool (entity or keyword extraction model) for this task.
    Table 7 (in Appendix A.1) illustrates examples of concepts identified using the three techniques. It shows that the entity extraction model misses many important concepts while the keyword extraction model identifies a lot of insignificant concepts also. In contrast, instruction technique successfully identifies all the important concepts. [...]
    [...]

    2.2.3 Create Validation Question
    We start the validation procedure for a concept by creating a question that tests the correctness of the information (in the generated sentence) pertaining to the concept. [...] For creating these questions, we explore the following two techniques:
    Question Generation Tool: Here, we use an off-the-shelf answer-aware question generation model.
    *Instructing the Model*: Here, we directly instruct the model to create a validation question checking the correctness of the information about the selected concept. [...]
    [...]

    2.2.4 Find Relevant Knowledge
    *Web Search*: In order to answer the validation question, we retrieve knowledge relevant to it which serves as additional context. For generality and wide coverage, we use web search (via Bing search API) for retrieving this knowledge. However, we note that any other search API or knowledge corpus can also be utilized for this purpose.
    Self-Inquiry: We also explore a self-inquiry technique where we directly prompt the model to answer the validation question. In this technique, the model relies on its parametric knowledge to answer the validation question. This technique has several drawbacks as compared to web search such as lack of a reliable strategy to extract the parametric knowledge from the model and staleness of the parametric knowledge.
    Note that the proposed knowledge retrieval step in our approach has several benefits, such as (a) it does not retrieve knowledge when it is not required, i.e., when the model is already sufficiently confident (since we show it is less likely to hallucinate in such scenarios), (b) it individually retrieves knowledge pertinent to the concept(s) on which the calculated probability score is low thus providing it sufficient and relevant context for accurate validation / mitigation.

    2.2.5 Answer Validation Question
    In this step, we prompt the model to answer the validation question (leveraging the retrieved knowledge as context) and verify its response. [...]
    [...]

    Hallucination Mitigation
    For mitigating the hallucination in the generated sentence, we instruct the model to repair the generated sentence by either removing or substituting the hallucinated information using the retrieved knowledge as evidence. [...]
    [...]

    2.4 Design Decisions
    [...]

    Why validation is done using the web search?
    Our preferred technique for retrieving knowledge is web search because the web is more likely to contain the updated knowledge in comparison to a knowledge corpus whose information can become stale, outdated, and obsolete.

    [Image caption:] Figure 3: Distribution of instances across different domains in our topic set.

    [...]"

    Comment
    This document is interesting and revealing.

    Maybe some of our fans and readers have also spotted the elaborated circumscriptions of (some kind of an)

  • ontology (learning) and
  • evolutionary Generate and Test (G'n'T) validation and verification approach in relation to the Stable Model Semantics (SMS), Well Founded Semantics (WFS), and Answer Set Semantics (ASS), which we have integrated with our coherent Ontologic Model (OM) (see also the Clarification #1 of the 14th of June 2016),

    to unsuccessfully avoid a causal link with our OS and also all the rest of the fields of

  • Common Sense Computing (CSC),
  • validation and verification,
  • Information Retrieval (IR) System (IRS), including
    • Search System (SS) or Search Engine (SE), including
      • web search,
    • etc.,
  • Knowledge Representation and Reasoning (KRR), including
    • Semantic Net (SN),
    • Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG),
    • etc.,
  • Knowledge Retrieval (KR) System (KRS), including
    • Semantic Search Engine (SSE),
  • etc..

    From the legal point of view, the copyright infringement is obvious.

    From the technical point of view we note that the quoted partial plagiarism and fake of our OS only cures the symptom, but does not eliminate the cause, "because eventually it is only applying the brute force approach and therefore still making the old mistake of the field of Artificial Intelligence (AI)" (see also the Clarification of the 1st of February 2023).

    Honestly, we are not quite sure how the authors get the reduction down to 14.5%.
    Per 100 responses: 47.5% or 47.5 are hallucinations, 88% detection recall or 41.8 potential hallucinations detected, 57.6% mitigation rate or 24.07 hallucinations mitigated, leaving 23.43% or 23.43 hallucinations remaining, and 3.x%(?) facts wrongly removed.
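
    The following back-of-the-envelope check (in Python) reproduces this estimate, assuming that the reported detection recall and mitigation rate simply compose multiplicatively over 100 responses; it is our own reading of the reported numbers, not the evaluation protocol of the authors.

    # Back-of-the-envelope check (assumption: recall and mitigation rate compose multiplicatively).
    responses = 100.0
    hallucinated = 0.475 * responses        # 47.5 hallucinated responses
    detected = 0.88 * hallucinated          # 88% detection recall   -> 41.8
    mitigated = 0.576 * detected            # 57.6% of the detected  -> ~24.1
    remaining = hallucinated - mitigated    # ~23.4 remaining, clearly above the reported 14.5
    print(round(detected, 2), round(mitigated, 2), round(remaining, 2))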

    See also the related section of the note % + %, % OAOS, % HW #11 of the 24th of May 2024, which lists the related activities as well.

    We quote an online encyclopedia about the subject eXplainable Artificial Intelligence (XAI) or interpretable AI, or eXplainable Machine Learning (XML): "Explainable AI (XAI), often overlapping with Interpretable AI, or Explainable Machine Learning (XML), either refers to an artificial intelligence (AI) system over which it is possible for humans to retain intellectual oversight, or refers to the methods to achieve this.[1][2] The main focus is usually on the reasoning behind the decisions or predictions made by the AI[3] which are made more understandable and transparent.[4] XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.[5][6]
    XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.[7] XAI may be an implementation of the social right to explanation.[8] Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on.[9] This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.[10]
    Machine learning (ML) algorithms used in AI can be categorized as white-box or black-box.[11] White-box models provide results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and may not be understood even by domain experts.[12] XAI algorithms follow the three principles of [(list points added)]

  • transparency,
  • interpretability, and
  • explainability.

    A model is

  • transparent "if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer."[13]
  • Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans.[14][15 [The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. [June 2018]]][16]
  • Explainability is a concept that is recognized as important, but a consensus definition is not yet available;[13] [...]

    [...]
    Sometimes it is also possible to achieve a high-accuracy result with white-box ML algorithms. These algorithms have an interpretable structure that can be used to explain predictions.[19 [Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. [2019]]] [...]
    [...]

    Goals
    Cooperation between agents - in this case, algorithms and humans - depends on trust. If humans are to accept algorithmic prescriptions, they need to trust them. Incompleteness in formal trust criteria is a barrier to optimization. Transparency, interpretability, and explainability are intermediate goals on the road to these more comprehensive trust criteria.[26] This is particularly relevant in medicine,[27] especially with clinical decision support systems (CDSS), in which medical professionals should be able to understand how and why a machine-based decision was made in order to trust the decision and augment their decision-making process.[28]
    AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data but do not reflect the more nuanced implicit desires of the human system designers or the full complexity of the domain data. [...]
    One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop" without greatly sacrificing AI performance. Human users of such a system can understand the AI's cognition (both in real-time and after the fact) and can determine whether to trust the AI.[31] Other applications of XAI are knowledge extraction from black-box models and model comparisons.[32] [...] The term "glass box" is often used in contrast to "black box" systems, which lack transparency and can be more difficult to monitor and regulate.[33] The term is also used to name a voice assistant that produces counterfactual statements as explanations.[34]

    Explainability versus interpretability
    There is a difference between the terms explainability and interpretability in the context of AI.[35]
    [Table:]
    Interpretability "level of understanding how the underlying (AI) technology works" [...][36]
    Explainability "level of understanding how the AI-based system ... came up with a given result" [...][36]

    History and methods
    During the 1970s to 1990s, symbolic reasoning systems, such as MYCIN,[37] GUIDON,[38] SOPHIE,[39] and PROTOS[40][41] could represent, reason about, and explain their reasoning for diagnostic, instructional, or machine-learning (explanation-based learning) purposes. [...] Symbolic approaches to machine learning relying on explanation-based learning, such as PROTOS, made use of explicit representations of explanations expressed in a dedicated explanation language, both to explain their actions and to acquire new knowledge.[41]
    In the 1980s through the early 1990s, truth maintenance systems (TMS) extended the capabilities of causal-reasoning, rule-based, and logic-based inference systems.[43]: 360-362  A TMS explicitly tracks alternate lines of reasoning, justifications for conclusions, and lines of reasoning that lead to contradictions, allowing future reasoning to avoid these dead ends. To provide an explanation, they trace reasoning from conclusions to assumptions through rule operations or logical inferences, allowing explanations to be generated from the reasoning traces. As an example, consider a rule-based problem solver with just a few rules about Socrates that concludes he has died from poison: [...]
    By the 1990s researchers began studying whether it is possible to meaningfully extract the non-hand-coded rules being generated by opaque trained neural networks.[45] Researchers in clinical expert systems creating[clarification needed] neural network-powered decision support for clinicians sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice.[8] [...]
    Marvin Minsky et al. raised the issue that AI can function as a form of surveillance, with the biases inherent in surveillance, suggesting HI (Humanistic Intelligence) as a way to create a more fair and balanced "human-in-the-loop" AI.[47 [The Society of Intelligent Veillance. [27-29th of June 2013]]]
    Modern complex AI techniques, such as deep learning and genetic algorithms, are naturally opaque.[48] To address this issue, methods have been developed to make new models more explainable and interpretable.[49 [Interpretable neural networks based on continuous-valued logic and multicriteria decision operators. [8th of July 2020]]][15][14][50][51][52 [Explainable Neural Networks Based on Fuzzy Logic and Multi-criteria Decision Tools. [2021]]] [...]
    There has been work on making glass-box models which are more transparent to inspection.[19][63] [...]
    Some techniques allow visualisations of the inputs to which individual software neurons respond to most strongly. Several groups found that neurons can be aggregated into circuits that perform human-comprehensible functions, some of which reliably arise across different networks trained independently.[69][70]
    There are various techniques to extract compressed representations of the features of given inputs, which can then be analysed by standard clustering techniques. Alternatively, networks can be trained to output linguistic explanations of their behaviour, which are then directly human-interpretable.[71] Model behaviour can also be explained with reference to training data - for example, by evaluating which training inputs influenced a given behaviour the most.[72]

    [...]

    Limitations
    Despite ongoing endeavors to enhance the explainability of AI models, they persist with several inherent limitations.
    Adversarial parties
    [...]
    Technical complexity
    [...]
    Understanding versus trust
    The goal of explainability to end users of AI systems is to increase trust in the systems, even "address concerns about lack of 'fairness' and discriminatory effects".[77] However, even with a good understanding of an AI system, end users may not necessarily trust the system.[79] [...]
    [...]

    Criticism
    Some scholars have suggested that explainability in AI should be considered a goal secondary to AI effectiveness, and that encouraging the exclusive development of XAI may limit the functionality of AI more broadly.[82][83] Critiques of XAI rely on developed concepts of mechanistic and empiric reasoning from evidence-based medicine to suggest that AI technologies can be clinically validated even when their function cannot be understood by their operators.[82]
    Some researchers advocate the use of inherently interpretable machine learning models, rather than using post-hoc explanations in which a second model is created to explain the first. This is partly because post-hoc models increase the complexity in a decision pathway and partly because it is often unclear how faithfully a post-hoc explanation can mimic the computations of an entirely separate model.[19] However, another view is that what is important is that the explanation accomplishes the given task at hand, and whether it is pre or post-hoc doesn't matter. If a post-hoc explanation method helps a doctor diagnose cancer better, it is of secondary importance whether it is a correct/incorrect explanation.
    The goals of XAI amount to a form of lossy compression that will become less effective as AI models grow in their number of parameters. Along with other factors this leads to a theoretical limit for explainability.[84 [Is explainable AI a race against model complexity? [2022]]]

    [...]"

    Comment
    We have once again the impression that not only the authors of the quoted webpage have taken our OS as source of inspiration and blueprint, but also the authors of some of the works referenced in this webpage, and some of the works referenced in the referenced works, as one can see by the following exemplary works included in its list of references:

  • Fisher, M., Balunovic, M., Drachsler-Cohen, D., Gehr, T., Zhang, C., and Vechev, M.: DL2: Training and Querying Neural Networks with Logic. 2019.
  • Franca, M. V., Zaverucha, G., and Garcez, A. S. d.: [CILP++:] Fast relational learning using bottom clause propositionalization with artificial neural networks. 2014.
  • Garcez, A. S. d., Broda, K., and Gabbay, D. M.: Neural-symbolic learning systems: foundations and applications. 2012.
  • Hu, Z., Ma, X., Liu, Z., Hovy, E., and Xing, E. P.: Harnessing Deep Neural Networks with Logic Rules. 2019.
  • Lin, C. T., Lee, C.S.G.: Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems. 1996.
  • Towell, G. G., Shavlik, J. W., and Noordewier, M. O.: [KBANN:] Refinement of approximate domain theories by knowledge-based neural networks. 1990.
  • Xu, J., Zhang, Z., Friedman, T., Liang, Y., and den Broeck, G. V.: A semantic loss function for deep learning with symbolic knowledge. 2018.

    To increase rationality, explainability and interpretability, and also validity and verifiability ...

    Obviously, we have created something new and restarted something again in this specific field as well. And our Evoos and our OS have integrated Neuro-Fuzzy and KBANN as well, and also created the integration of ANN with for example Prolog (e.g. logic rules) and Probabilistic Graphical Model (PGM) or structured probabilistic model graph as part of AI 3.
    It should be easy to differentiate the prior art and our work of art from the plagiarists.

    We quote an online encyclopedia about the subject Reason Maintenance System (RMS), including Truth Maintenance System (TMS): "Reason maintenance[1 [The ins and outs of reason maintenance. [1983]]][2 [Truth maintenance systems for problem solving. [1978]]] is a knowledge representation approach to efficient handling of inferred information that is explicitly stored. Reason maintenance distinguishes between base facts, which can be defeated, and derived facts. As such it differs from belief revision which, in its basic form, assumes that all facts are equally important. Reason maintenance was originally developed as a technique for implementing problem solvers.[2] It encompasses a variety of techniques that share a common architecture:[3] two components - a reasoner and a reason maintenance system - communicate with each other via an interface. The reasoner uses the reason maintenance system to record its inferences and justifications of ("reasons" for) the inferences. The reasoner also informs the reason maintenance system which are the currently valid base facts (assumptions). The reason maintenance system uses the information to compute the truth value of the stored derived facts and to restore consistency if an inconsistency is derived.
    A truth maintenance system, or TMS, is a knowledge representation method for representing both beliefs and their dependencies and an algorithm called the "truth maintenance algorithm" that manipulates and maintains the dependencies. The name truth maintenance is due to the ability of these systems to restore consistency.
    A truth maintenance system maintains consistency between old believed knowledge and current believed knowledge in the knowledge base (KB) through revision. If the current believed statements contradict the knowledge in the KB, then the KB is updated with the new knowledge. It may happen that the same data will again be believed, and the previous knowledge will be required in the KB. If the previous data are not present, but may be required for new inference. But if the previous knowledge was in the KB, then no retracing of the same knowledge is needed. The use of TMS avoids such retracing; it keeps track of the contradictory data with the help of a dependency record. This record reflects the retractions and additions which makes the inference engine (IE) aware of its current belief set.
    Each statement having at least one valid justification is made a part of the current belief set. When a contradiction is found, the statement(s) responsible for the contradiction are identified and the records are appropriately updated. This process is called dependency-directed backtracking.
    The TMS algorithm maintains the records in the form of a dependency network. Each node in the network is an entry in the KB (a premise, antecedent, or inference rule etc.) Each arc of the network represent the inference steps through which the node was derived.
    A premise is a fundamental belief which is assumed to be true. They do not need justifications. The set of premises are the basis from which justifications for all other nodes will be derived.
    There are two types of justification for a node. They are:
    1. Support list [SL]
    2. Conditional proof (CP)
    Many kinds of truth maintenance systems exist. Two major types are single-context and multi-context truth maintenance. In single context systems, consistency is maintained among all facts in memory (KB) and relates to the notion of consistency found in classical logic. Multi-context systems support paraconsistency by allowing consistency to be relevant to a subset of facts in memory, a context, according to the history of logical inference. This is achieved by tagging each fact or deduction with its logical history. Multi-agent truth maintenance systems perform truth maintenance across multiple memories, often located on different machines. de Kleer's assumption-based truth maintenance system (ATMS, 1986) was utilized in systems based upon KEE on the Lisp Machine. The first multi-agent TMS was created by Mason and Johnson. It was a multi-context system. Bridgeland and Huhns created the first single-context multi-agent system."

    Comment
    See also the section Dynamic Symbol Systems of the Clarification of the 28th of April 2016, which references TMS as well.
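
    For illustration only, the following minimal sketch shows a single-context, justification-based TMS as circumscribed in the quote, restricted to support-list justifications: a node is believed exactly when it is a premise or when at least one of its justifications has all of its antecedents believed, and retracting a premise propagates through the dependency network; the class and method names are merely our own assumptions for this sketch.

    # Minimal sketch (assumption): a single-context, justification-based TMS with
    # support-list justifications only.

    class TMS:
        def __init__(self):
            self.premises = set()        # base facts, assumed to be true
            self.justifications = {}     # node -> list of support lists (sets of antecedents)

        def add_premise(self, node):
            self.premises.add(node)

        def retract_premise(self, node):
            self.premises.discard(node)

        def justify(self, node, antecedents):
            self.justifications.setdefault(node, []).append(set(antecedents))

        def believed(self):
            # Recompute the current belief set from the premises and justifications
            # by iterating to a fixed point over the dependency network.
            current = set(self.premises)
            changed = True
            while changed:
                changed = False
                for node, supports in self.justifications.items():
                    if node not in current and any(s <= current for s in supports):
                        current.add(node)
                        changed = True
            return current

    # Example in the spirit of the Socrates example mentioned in the quote.
    tms = TMS()
    tms.add_premise("socrates_drank_poison")
    tms.justify("socrates_is_dead", ["socrates_drank_poison"])
    print("socrates_is_dead" in tms.believed())    # True
    tms.retract_premise("socrates_drank_poison")
    print("socrates_is_dead" in tms.believed())    # False: the justification no longer holds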

    We quote an online encyclopedia about the subject belief revision: "Belief revision (also called belief change) is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.
    What makes belief revision non-trivial is that several different ways for performing this operation may be possible. For example, if the current knowledge includes the three facts "A is true", "B is true" and "if A and B are true then C is true", the introduction of the new information "C is false" can be done preserving consistency only by removing at least one of the three facts. In this case, there are at least three different ways for performing revision. In general, there may be several different ways for changing knowledge.

    Revision and update
    [...]

    Contraction, expansion, revision, consolidation, and merging
    [...]

    [...]"

    Comment
    Multi-Agent Belief Revision (MABR) with ontology has already been discussed in relation to the

  • Belief-Desire-Intention (BDI) software (deliberative agent) model (or system), paradigm or architecture for rational and cognitive Agent-Based Systems (ABSs),
  • Foundation for Intelligent Physical Agents (FIPA) Java Agent DEvelopment Framework (JADE), and
  • real-time real-world Distributed Artificial Intelligence (DAI), including Multi-Agent System (MAS), MAS agency or Agent Society (AS), and Distributed Problem Solving (DPS).

    See also the

  • Clarification of the 13th of April 2022 and
  • ...

    Needless to say, integrated in our Evoos with its coherent OM and our OS with its OntoBot and everything else missing in our Evoos.
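
    For illustration only, the following minimal sketch reproduces the revision example given in the quote, assuming a naive, hard-coded consistency check: it enumerates the maximal subsets of the old beliefs that can be kept when the new information "C is false" is added.

    # Minimal sketch (assumption): naive belief revision by minimal contraction.
    # Old beliefs: A, B, and the rule "if A and B then C"; new information: "not C".
    from itertools import combinations

    OLD_BELIEFS = {"A", "B", "A_and_B_implies_C"}

    def consistent_with_not_C(beliefs):
        # The only possible clash here: A, B, and the rule together entail C,
        # which contradicts the new information "not C".
        return not ({"A", "B", "A_and_B_implies_C"} <= beliefs)

    def revisions():
        # Keep as many of the old beliefs as possible (minimal change).
        for size in range(len(OLD_BELIEFS), -1, -1):
            kept = [set(c) for c in combinations(sorted(OLD_BELIEFS), size)
                    if consistent_with_not_C(set(c))]
            if kept:
                return kept

    for option in revisions():
        print(sorted(option) + ["not_C"])
    # Three maximal options, each giving up exactly one of the three old beliefs.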

    Prior art
    Logics, Semantics, and Ontologics
    The complete or all-encompassing integration of the fields of

  • logics,
  • semantics, and
  • ontologics,

    specifically the related fields of

  • Knowledge-Based System (KBS), and
  • Ontology-Based System (OBS)

    has already been discussed multiple times in relation to our Evoos and our OS.
    In this relation, we would like to note that a lot of work has been done in all these fields and everything else missing was created and integrated with our Evoos and our OS.

    SmartKom, SmartWeb, Deep Map, and Co.
    The complete or all-encompassing integration of the fields of

  • Agent-Based System (ABS) or Agent-Oriented Programming (AOP),
  • DAI or MAS,
  • Model-Based System (MBS),
  • Ontology-Based System (OBS), and
  • Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia User Interface (M⁶UI),

    and the paradigms of

  • Model-Driven Architecture (MDA),
  • Object-Orientation (OO 1),
  • Ontology-Orientation (OO 2), and
  • Ontologic(-Orientation) (OO 3)

    has already been discussed multiple times in relation to our

  • Evoos with its coherent Ontologic Model (OM), including the "Ontologic Object that provide a semantic representation of actions, processes, and objects", and multimodality, and
  • OS with its Ontologic roBot (OntoBot) and Ontologic Scope (OntoScope) components, and its Ontoscope (Os).

    SimSimi
    The complete or all-encompassing integration of the fields of

  • chatbot,
  • Dialog System, including Dialogue Management System (DMS),
  • Conversational System (CS or ConS), including Conversational Agent System (CAS or ConAS),
  • etc.

    has already been discussed multiple times in relation to our Evoos and our OS.
    SimSimi is a relatively simple chatbot, which is implemented as a QAS, was initially presented around (May) 2002, and was marketed again in 2012 as an app for our Ontoscope (Os) by using our publications.
    In contrast to one of the first chatterbots or chatbots ELIZA (after Eliza Doolittle in Pygmalion and My Fair Lady), SimSimi has a feature, which allows users to teach it to respond correctly and in this way to add new contents to its underlying database through interaction.
    But SimSimi is not a conversational system, which is a marketing claim. In fact, such chatbots only simulate a conversation and give a user an illusion of understanding.
    The voice-based User Interface (UI) came only later with the introduction of (voice-based or speech controlled) virtual assistants, and Intelligent Personal Assistants (IPAs), such as the Personalized Assistant that Learns (PAL) and the Cognitive Agent that Learns and Organizes (CALO), better known by the partial OntoBot variant Apple Siri (see the related section below), on our Ontoscope (Os), better known as the Ontoscope variants Apple iPhone and iPad, Android Smartphone, and other mobile devices.
    Chatbots and QAS like ELIZA (after Eliza Doolittle), SimSimi, etc. are prior art, which already draws the white, yellow, or red line from the legal point of view.

    Personalized Assistant that Learns (PAL) and the Cognitive Agent that Learns and Organizes (CALO)
    The complete or all-encompassing integration has already been discussed multiple times in relation to our Evoos and our OS with its OntoBot and Ontoscope (Os).

    Comment
    "Also note once again that this subsymbolic, connectionist, probabilistic, and statistic brute force approach is not sufficient, because it has so many already known and discussed problems and deficits, like for example common sense, bias, inheritance of biases, misuse, halucination, etc. respectively no trustworthiness, no transparency, no validation, no verification, and so on" (see also the note These 'R' Us Too of the 22nd of November 2023 the of November 2023).
    "About the brute force approach on the basis of ML, ANN, and SC we do not need to discuss anymore, because it is about quantity respectively probabilistics, statistics, fantasy, hallucination, etc., but not about quality respectively validation and verification, transparency, rationality, trust, etc." (see the note Do not be fooled by banks of the 11th of September 2023).
    Only with the discussed training, detection, and mitigation techniques based on integrated approaches, and depending on the datasets and benchmarks commonly used for training and evaluation, the precision or accuracy can be increased to around 75%, over 82%, up to 91%.
    But obviously, the resulting models are no Large Language Models (LLMs) of the fields of Computational Linguistics, and Natural Language Processing and Natural Language Understanding anymore, but coherent Ontologic Models (OMs), including Foundation Models (FMs), Foundational Models (FMs), Capability and Operational Models (COMs), and so on, which function on the basis of natural, visual, and other languages, and also on our bidirectional Bridge from Natural Intelligence (NI) to Artificial Intelligence (AI) (Bridge from NI to AI). And the latter belongs to our copyrighted expression of idea presented with our original and unique sui generis works of art Evoos and OS in this genre of Artificial General Intelligence (AGI) or General Artificial Intelligence (GAI) and other genres. Is not it? :)

    In fact, we were already at this point around the year 1999 and already concluded around the year 1997 for other related reasons that we have to begin with ontology and later to add graph theory and more undisclosed ingredients of our secret sauce.
    We also concluded that we already had all combinations and variants of these logic, bionic, cybernetic, and ontonic fields and much more from other fields integrated, and needed a basic and literally spoken universal framework to handle the resulting synthesis by the user and the cybernetic self-reflection, self-portrait, or self-image, and also the overall system.

    Eventually, the era of the so-called Large Language Model (LLM) has already been over for some few years unofficially and for several months officially. With

  • multimodality, and
  • operationality, and also
  • rationality,
  • validity,
  • verifiability,
  • explainability,
  • interpretability,
  • factuality, and
  • accuracy, as well as
  • safety and security,
  • reliability, and
  • optimality

    it is obvious that the adjective large does not refer to the size of a language model, but to the scope of its capability and functionality respectively that the designation Large Language Model (LLM) is merely an illegal synonym for a specific part of our coherent Ontologic Model (OM) and is used to mislead the public about the true origin.

    Consequently, there are no other bots anymore, but only variants of our one and only OntoBot of our original and unique, personal, scientifically fictitious, unforeseeable and unexpected, copyrighted sui generis work of art titled Ontologic System and created by C.S..

    It should be easy to see once again that our Evoos and our OS are taken as source of inspiration and blueprint, which often if not in all cases infringes the rights and properties of C.S. and our corporation.
    This leads us back to the start.

    © or ® or both
    Christian Stroetmann GmbH
    Disclaimer