23:11 UTC+1
Figure AI blacklisted
Artificial Neural Network (ANN) MultiModal Model (MMM) (ANNMMM), including what is wrongly called Large Language Model (LLM) and Large MultiModal Model (LMMM), is copyrighted anyway (see the note These 'R' Us, still of the 19th of February 2024).
Please note that in general a removal of a company from our blacklist only becomes effective after the payment of damage compensations and the conduct of the other mandatory legal actions.
23:45 UTC+1
1X Technologies blacklisted
The company will be removed from our blacklist with the establishment of new companies as joint ventures with the company Microsoft and others.
Please note that in general a removal of a company from our blacklist only becomes effective after the payment of damage compensations and the conduct of the other mandatory legal actions.
05:08 UTC+1
SOPR introduced WHiVSiV regulation
What Happens in Vegas, Stays in Vegas (WHiVSiV)
We made some considerations about the exclusive and mandatory infrastructures of our SOPR and our other Societies in the Republic of India, the P.R.China, and other countries, unions of states, and economic zones.
Specifically, the P.R.China must find highly qualified members of its state party, who will be members of the boards of our local business units and will monitor and act in compliance with the local laws, so that global activities can happen at all.
Furthermore, the raw signals and data, information, knowledge bases, belief bases, models, and algorithms of companies stay in the related sovereign territories and economic zones, specifically in Vegas.
For example, a vehicle manufacturer that has its headquarters in, for example, India or China and sells its products in, for example, the U.S.America, Canada, the U.K., or the European Union (EU) will not be allowed to send such digital assets to its home country.
In this relation, we would like to note that such digital assets also belong to C.S. and our corporation in the legal scope of ... the Ontoverse, and in accordance with the Terms of Service (ToS) of our SOPR, our SOPR has the right to unrestricted access to the raw signals and data of its members and licensees anyway.
07:00 and 10:00 UTC+1
Clarification
In November 2023, we announced a clarification about the topics human emulation, human-like action, user reflection, impersonation, reflection of a reflection, etc..
Indeed, prior art comprises the utilization of the field of Artificial Neural Network (ANN) in the fields of Natural Language Processing (NLP or NatLP) (NLParsing and NLGeneration) and Cognitive Model (CM), but not in relation to the metaphor, hypothesis, vision, idea, or genre of the so-called Global Brain (GB), the fields of online search engine, Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), etc.. The latter is already included in our Evolutionary operating system (Evoos) and our Ontologic System (OS).
Howsoever, we always viewed our general, domain independent Text-to-Text (T2T) transformation engine, Large Language Model (LLM) chatbot, softbot, etc., as some kind of a highly persuasive and entertaining one-trick pony of the fields of mentalism and Neuro-Linguistic Programming (NLP or NeuroLP). See also the message OS is ON of the 9th of May 2016.
This also shows the relationship of the isomorphism of signs, logic, and language to the isomorphism of semiotics, calculus, and semantics, which I also refer to as liquidity in reference to the CHemical Abstract Machine (CHAM).
We also explained that our works of art are also viewed as science fiction, thoughts of C.S. about how things work without any proof, and personal performances to show visionary, creative, and inventive capabilities and competences, true origins of things, ontological argument or ontological proof, belief system, universal ledger, and so on (see also the related section in the note Everything related to OM, etc. blacklisted of the 7th of February 2024). In fact, no initial theory, experimentation, analysis, proof, and final conclusion exist, which are required and usual in methodically conducted modern sciences (begin with René Descartes to get into the subject matter). Not even an implementation exists, but only
mappings of natural, physical, biological, philosophical, etc. items to fictional, metaphysical, cybernetical, ontological, etc. items,
architectures, and
lists of related and suggested software, hardware, and organizations.
C.S. even wrote in The Proposal the following:
"On the basis of an [...] outlined analogy between an operating system and a brain, an attempt is made to construct the development and architecture of an operating system according to evolutionary and genetic aspects."
"[...] does not attempt to model or simulate a human brain. Rather, the target system to be modeled should be able to develop intelligence in the sense of [artificial intelligence,] machine learning, [soft computing, and artificial neural network] if this can be achieved at all."
"A possible link to an operating system of the human brain, on the other hand, cannot and should not be investigated [...]."
The reason is that C.S. already
talked about AI, ML, immobot, robot, and cyborg in relation to cybernetical and ontological reflection, augmentation, and extension, and
worked on something totally different and new compared to an "Analyse und Entwurf eines Betriebssystems nach evolutionären und genetischen Aspekten==Analysis and Design of an Operating System according to Evolutionary and Genetic Aspects".
And because the field of mentalism belongs to the art of magic and the field of NeuroLP, and NeuroLP belongs to the pseudo-sciences, we once again have the situation that no scientific theory is given in our original and unique case.
This is important from a legal point of view, because scientific theories, achievements, and understandings are not protected by copyright, but expressions of the arts are protected.
In addition, we have the legal aspects of moral rights and Lanham (Trademark) rights, and also copyrights due to new expressions of idea, and compilations, integrations, architectures, and so on, which belong to an overall sui generis.
As a consequence, T2T engine, LLM chatbot, softbot, etc. are considered as copyright infringements in this regard.
And
Artificial Neural Network (ANN) MultiModal Model (MMM) (ANNMMM), including what is wrongly called Large Language Model (LLM) and Large MultiModal Model (LMMM) (see the note These 'R' Us, still of the 19th of February 2024),
utilization of LLM as a Capability and Operational Model (COM), and
control of Autonomous System (AS) and Robotic System (RS) based on our OM, COM, and LLM utilized as a COM
are copyrighted anyway.
And no, Rodney Brooks has not presented these expressions of idea, and compilations, integrations, architectures, and so on created by C.S..
But this is also important from the scientific point of view:
We quote another time the document titled "Immobile Robots - AI in the New Millennium" and publicized in 1996: "We are only now becoming aware of the rapid construction of a ubiquitous, immobile robot infrastructure that rivals the construction of the World Wide Web and has the potential for profound social, economic, and environmental change. Tapping into this potential will require embodying immobots with sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. Developing these systems requires fundamental advances in model-based autonomous system architectures that are self-modeling, self-configuring, and model based programmable and support deliberated reactions. This development can only be accomplished through a coupling of the diverse set of high-level, symbolic methods and adaptive autonomic methods offered by [proper, logic-based or strong AI [as part of a mixture with emergence-based ML, CI, ANN, CV, and other subfields of SoftBionics (SB)]]."
Comment
First of all, we repeat that there is no need to copy and steal, because we already hold the copyright for our
Universal Brain Space (UBS) or Global Brain of the second generation (GB 2.0),
Cyber-Physical System of the second generation (CPS 2.0),
Internet of Things of the second generation (IoT 2.0),
Ubiquitous Computing of the second generation (UbiC 2.0),
Networked Embedded System of the second generation (NES 2.0),
Ambient Intelligence of the second generation (AmI 2.0),
and so on, including our
- Industry of the fourth generation (I 4.0) (ontology and digital twin),
- Industry of the fifth generation (I 5.0) (eXtended Mixed Reality (XMR) or simply eXtended Reality (XR), etc.), and
- Industry of the sixth generation (I 6.0) (Cognitive System (CS or CogS), Ontologic Model (OM) (e.g. Large Language Model (LLM)) and Ontologic Computing (OC) (e.g. transformative, generative, and creative Bionics, prompt engineering, forming, shaping, configuring respectively cognitive processing, educating, teaching, learning, etc.)),
- Industrial Internet of Things (IIoT),
- and so on,
as well as our
Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) respectively Ontoverse (Ov) and New Reality (NR), including our rational
- Ontologic roBot (OntoBot),
- Ontologic Computer-Aided technologies (OntoCAx),
- and so on.
And no, the segmentation and separation, and the edition, modification, and reduction tricks do not work.
There is absolutely no doubt that we have new expressions of idea, and compilations (collections and assemblages, stacks, etc.), integrations, architectures (models, etc.), and so on of the prior art.
By the way:
Please note that the compliance with the Terms of Service (ToS) of our SOPR is required for our original and unique, copyrighted expressions of idea, compilations, integrations, architectures, and so on of os-level and network virtualization, container and microservice management, orchestration, federation, and runtime systems, platforms, components, applications, and services, including Docker, Kubernetes, Mesos, and Co., Cloud Computing of the third generation (CC 3.0) (e.g. hybrid and multi-cloud, Cloud-native Computing (CnC)), Cloud Artificial Intelligence, the items listed above, and all the other items listed elsewhere on our websites.
It is not an Application Programming Interface (API), Virtual Machine (VM), guarantee and protection of accrued talent, seamless technological development, no mandated steps of an ordinary technological progress, and no technical benefit for the society, freedom of choice, innovation, and competition pro bono publico==for the benefit of the public, or whatsoever, but essential infrastructures that we do own, manage, and control, and also exploit (e.g. commercialize (e.g. monetize)). :D
03:39 UTC+1
SOPR considering licensing of Os like OAOS
Ontoscope (Os)
Ontologic Applications and Ontologic Services (OAOS)
We always make clear that our Ontoscope is not what is wrongly called smartphone or AI phone, but a work of art and an essential part of our OS, which is a self-reflection, and also a cybernetic and ontonic reflection, augmentation, and extension, which means a multimedia work of art and a sculpture.
For a better understanding, we have compared it with the
head of a robot,
Golden Compass on the 12.12.2007, and
Holocron in the Original of the 26th of November 2010.
See also the note T2T, LLM chatbot, softbot, etc. © infringement of the 3rd of March 2024 (yesterday).
Our Ontoscope includes the basic variants
mobile robot,
immobile robot,
humanoid robot,
industry robot,
etc.,
in all variants with
torso,
legs,
arms,
heads,
wheels,
wings,
etc.,
as
handheld,
wrist worn,
head-mounted,
implanted,
etc.,
box,
fabric,
Printed Circuit Board (PCB),
System on a Chip (SoC),
etc..
As in the case of discounts, we have no reason and do not see any compromise, win-win, and so on to lower licensing fees for related HardWare (HW).
04:33 UTC+1
SOPR will not join Open Invention Network (OIN)
The Open Invention Network (OIN) and its members and licensees, including the companies IBM, Alphabet (Google), and Microsoft, and also the Free and Open Source Software (FOSS) foundations Linux Foundation, Apache Foundation, and Mozilla Foundation, cannot protect illegal Free and Open Source Software (FOSS) in the legal scope of ... the Ontoverse (Ov).
And it should be more than obvious that our corporation, specifically our Society for Ontological Performance and Reproduction (SOPR), will not join the OIN or any other fraudulent or even seriously criminal foundation.
But we demand that all entities concerned, including research institutes, universities, 501(c) organizations, and industrial companies,
sign waivers regarding their copyrights on all illegal materials, which is common practice, for example when a license is changed, and
transfer all illegal materials, including patents, trademarks, models, algorithms, databases, source codes, publications, etc. to our SOPR
As Soon As Possible Or Better Said Immediately (ASAPOBSI).
Eventually, no protection by the Open Invention Network (OIN) is effective, specifically in relation to what is wrongly called Cloud-native Computing (CnC) and also called Cloud Computing of the third generation (CC 3.0) by us only for better understanding, which in fact is an essential part of our original and unique Evolutionary operating system (Evoos) and Ontologic System (OS) and the exclusive and mandatory infrastructures of our SOPR and our other Societies.
Nice try, but we will not be blackmailed by black labor.
03:36 and 05:30 UTC+1
Ontonics Further steps
Our Hightech Office Ontonics together with our OntoLab and Society for Ontological Performance and Reproduction (SOPR) is refining legal, economical, and other matters.
As we said in case of an establishment of a new company as a joint venture, a written admission of guilt is not required due to the resulting overall legal scope. But a written court-proof confirmation of exclusive rights (e.g. moral rights respectively Lanham (Trademark) rights) and properties (e.g. copyrights, raw signals and data, digital and virtual assets, and online advertisement estate) of C.S. and our corporation is highly advantageous to
increase the legal resilience, specifically in relation to the copyright law, Lanham (Trademark) act, and competition law, and
convince other entities to become a member and licensing partner of our SOPR and make them happy in relation to other matters.
We are also looking at options for
fair restructuring,
increased efficiencies and synergies,
further activities of research and development,
establishments of completely new business divisions,
market introductions of new technologies, goods, and services, and
other aspects,
which can only be realized under the roof of our corporation, specifically in relation to the
already existing businesses and also
undisclosed second catalogue of C.S..
14:09 and 14:56 UTC+1
Deutsche Börse Digital Exchange blacklisted
The exclusive and mandatory infrastructures of our SOPR and our other Societies already include the
Ontologic Financial System (OFinS) with its Ontologic Bank (OntoBank) with its Ontologic Exchange (OntoEx), and
Universal Ledger (UL).
We also have our own digital and virtual currencies with the OntoCoin, OntoTaler, and Qoin, and it is becoming more and more likely that they will become the official money in the legal scope of the ... Ontoverse (Ov).
And we have made crystal clear for some years that we do not want that crypto crap in the legal scope of ... the Ontoverse (Ov).
Bitcoin, Ether, Tether, and Co. will not continue infringing the rights and properties of C.S. and our corporation, definitely not, because we are already working on pulling their plugs, cutting their lines, and taking back all illegal materials.
There will be no Distributed Ledger Technology (DLT), Decentralized Web (DWeb), and distributed platforms as part of what is wrongly called Cloud-native Computing (CnC), because all is already our Evolutionary operating system (Evoos) and our Ontologic System (OS) and we provide them as part of the set of foundational and essential facilities, technologies, goods, and services, but nobody else.
Eventually, no reason exists to buy that crypto crap, which also has been discussed in the last years.
Unbelievable, they
give away our rights and properties, the foundations of that nonsense, for free on the one hand, but
think they could make the business with bets on that nonsense with their funds and trades on their exchanges all alone on the other hand.
Very clever (not really).
And there will be no compromises in relation to the rights and properties of C.S. and our corporation, and therefore no further negotiations.
Everybody has been warned again and again multiple times and, nice as we are, they even got more than sufficient time to get out of that illegal business by somekind of a soft landing. Now it is heading into a hard landing with total crash.
By the way:
Why not simply stay with gold and other precious metals? This is even tangible and legal, tried and proven.
14:56 UTC+1
Bitcoin Group and Bitcoin.de blacklisted
This action should be self-explanatory for everybody concerned and interested.
06:31 and 26:22 UTC+1
OpenAI is just many things it should not be
For sure, the establishment of OpenAI was an act of conspiracy and plot against C.S. and our corporation, as already proven, because Bill Gates with the company Microsoft (1 or more of its researchers were founding members) and Elon Musk with the company Tesla Motors want to get our Evolutionary operating system (Evoos) and our Ontologic System (OS) under their control, because they both do know, and in the meantime also the governments, commissions, federal authorities and agencies, industries, and other parts of their societies do know, that without our Evoos and our OS their power monopolies do not work.
The latter shows once again why we have exclusive rights and properties and why eventually OpenAI goes to our corporation as part of the transition process, comprising the payment of damage compensations and offset of business performances, the transfer of all illegal materials, and so on.
The lawsuit of E. Musk against OpenAI is just another attempt to get access to OpenAI's illegal implementations of our coherent Ontologic Model (OM), which was stolen from C.S. and our corporation, before the reconstitution, restoration, and restitution of all of the rights, properties, reputations, and momenta, as well as follow-up opportunities of C.S. and our corporation takes place in the near future, though we will take these legal actions anyway in relation to OpenAI, Tesla Motors, Microsoft, and Co..
And now we got first-hand evidence from OpenAI about what we always knew: that Elon Musk with Tesla Motors, Microsoft with its researchers, and other entities only established OpenAI because they wanted to steal our business units SoftBionics and Roboticle and more parts of our corporation for their own companies Tesla Motors and Microsoft with its own business units (e.g. Bing and Azure), and other business activities as their next illegal actions.
"This needs billions per year immediately or forget it," Musk emailed. "I really hope I'm wrong."
"As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control", OpenAI said in a blog.
And then Sam Altman said to himself "If E. Musk can just take the "absolute control", then I can do so as well without him." And also the management of Microsoft concluded the same and provided the billions of U.S. Dollar required to fund the costs of the computing power.
The latter also provides us first-hand evidence about the next move of conspiracy and plot against C.S. and our corporation by Microsoft and OpenAI, in addition to the evidence which emerged from the actions at the end of 2023.
We also got first-hand evidence that they all knew in 2015 how much the exclusive rights and properties of C.S. and our corporation are truly worth.
In addition, we got more evidence that they infringed and are still infringing the rights and properties of C.S. and our corporation as their "cash cow". For this reason, Microsoft also wanted to steal our Society for Ontological Performance and Reproduction (SOPR) to finance this incredibly expensive endeavour, because "[t]his needs billions per year".
In case of Microsoft, this is also no surprise for us and led to our x% + y% takeover demand as part of the damage compensations. We retain total control by ourselves over the rights and properties of C.S. and our corporation, which are exclusively managed and exploited (e.g. commercialized (e.g. monetized)) by our Society for Ontological Performance and Reproduction (SOPR) with the consent and on behalf of C.S..
And we have documented the whole case, including our publications and their reactions in
2015 when they stole our integration of the Capability-Based operating system (CBos) microkernel L4 in relation to the field of SoftBionics (SB) (e.g. AI, ML, CI, ANN, CV, CA, etc.),
2016 when the whole industry, including Alphabet (Google) and Co., changed the direction of research and development after our now legendary clarifications, and
2017 and the following years.
Obviously, the only entities playing with legal and open cards are C.S. and our corporation.
The whole story is a very typical business situation and human trait. Is it not?
By the way:
We will not pay for that mess with what is wrongly called Cloud-native Computing (CnC) and generative Artificial Intelligence (genAI).
There will be no legal solution other than what we worked out and submitted.
What we can offer OpenAI is a
- reasonable remuneration for truly own performances, which for sure is not 90 billion U.S. Dollar but maybe 99% + 1% due to the
- lack of own rights and properties on the one hand and
- causation of damages and resulting claims for compensation in the high three-digit billion if not already in the low one-digit trillion U.S. Dollar range, which are far exceeding this totally ridiculous evaluation on the other hand,
and
- legal continuation at one of our
- business units, such as for example SoftBionics and Roboticle, or
- newly established companies as joint ventures.
See also the note Monopoly card is already golden exit of the 6th of February 2024.
General Artificial Intelligence (GAI) or Artificial General Intelligence (AGI) is not generative Artificial Intelligence (genAI), and Artificial generative Intelligence (AgenI) does not exist at all.
04:44 UTC+1
Way of stealing irrelevant for copyright
We quote a first report, which is about a case of espionage at the company Alphabet (Google): ""The Justice Department will not tolerate the theft of artificial intelligence and other advanced technologies that could put our national security at risk," [a U.S.American Attorney General] said [...] adding that "we will fiercely protect sensitive technologies developed in America from falling into the hands of those who should not have them"."
We quote a second report, which is about a case of espionage at the company Alphabet (Google): "FBI Director [... the thief]'s alleged actions "are the latest illustration of the lengths" companies in China will go to, "to steal American innovation"."
We quote a third report, which is about a case of espionage at the company Alphabet (Google): "[...] Google's vast A.I. supercomputer data system [and] the "architecture and functionality" of the system, and [...] software used to "orchestrate" supercomputers "at the cutting edge of machine learning and A.I. technology"."
Comment
First of all, we have to make clear that the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. are not U.S.American innovation and development, and therefore the U.S.American Department of Justice (DoJ), the Federal Bureau of Investigation (FBI), or any other part of the U.S.America has no control over said rights and properties.
01:30 UTC+1
Ontonics Further steps
No further steps will be made until we have finished our set of legal documents.
As we said in the note % + %, % OAOS, % HW of the 28th of February 2024, "[t]he discussions, reasonings, terms and conditions, [as well as estimations about the ratios of company shares,] and other relevant matters have been publicized."
06:29 and 06:36 UTC+1
% + %, % OAOS, % HW #2
The discussions, reasonings, terms and conditions, etc. have been publicized.
Next joint ventures are with the companies Broadcom and Oracle.
Some non-binding examples for the ratios of company shares estimated by us:
Us:Broadcom 90%:10%
90% - 31.00% = 59.00%
90% - 28.75% = 61.25%
90% - 20.00% = 70.00%
90% - 17.00% = 73.00%
90% - 15.00% = 75.00%
Us:Oracle 80%:20%
80% - 28.75% = 51.25%
80% - 21.00% = 59.00%
80% - 20.00% = 60.00%
80% - 17.00% = 63.00%
80% - 15.00% = 65.00%
Us:Others 51+%:49-%
Both have to get the allowance and license for the performance and reproduction of certain parts of our Ontologic System (OS) anyway, which requires the compliance with the
national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
rights and properties of C.S. and our corporation, and
Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR).
One of the related demands is the transfer of all illegal materials, which in case of
Broadcom includes virtually the whole VMware (for sure we will act in case of Dell accordingly as well) and
Oracle includes, guess what, the Oracle Cloud.
Please note the following 2 facts:
The establishment of a new company as joint venture respectively execution of a business takeover or merger is part of the reconstitution, restoration, and restitution of all of the rights, properties, reputations, and momenta, as well as follow-up opportunities of C.S. and our corporation, or simply said, it is a transaction of damage compensations.
The increase in business value is directly connected with the
transition from Web Services (WS) to operating system (os) and Distributed System (DS) respectively what is wrongly called Cloud-native Computing (CnC) by others and called Cloud Computing of the third generation (CC 3.0) by us only for better understanding
- Amazon Web Services (AWS), AWS Outposts, and VMware Cloud on AWS and AWS Outposts,
- Microsoft Azure Stack, etc.,
- Google Anthos, etc.,
- Cisco Software-Defined Wide Area Network (SD-WAN), etc.,
- and so on
a first time, with the
provision of SoftBionics as a Service (SBaaS) technologies (SBaaSx) (e.g. SBaaS business, capability, and operational models, (sub)systems, and platforms),
a second time, and with the
introduction of generative Bionics
a third time.
09:37 UTC+1
Crypto Open Patent Alliance (COPA) blacklisted
Our
Peer-to-Peer Virtual Machine (P2PVM),
Distributed Ledger Technology (DLT), which is based on our integration of the
- Reiser4 file system with
- "full file metadata support",
- "metadata is stored as sub-files, so a file is actually a folder and a file",
- "atomicity supports transactions",
- "efficient journaling through [...] logs",
- "flexible plug-in system", and
- cryptographic add-on module or encryption plug-in module (see documentation of Reiser4), and
- "plug-in mechanism cryptographic algorithms can be added to the file system and integrated into security related operations",
and
- Berkeley Open Infrastructure for Network Computing (BOINC)
as part of our
Ontologic File System (OntoFS) with the basic properties of our Ontologic System (OS) of (mostly) being
"reflective", and
"validated and verified", and by the reflective property evaluable and verifiable, and validating and verifying, as well as
"specification- and proof-carrying"
(see the )
as part of our
resilient Distributed operating system (Dos) with
- atomic active object model (see Cognac based on Aperion (Apertos (Muse))),
which eventually and obviously includes the blockchain technique,
and also the
field of Algorithmic Information Theory 3 (AIT 3) and
closely related field of prime factorization, and
more foundational parts of cryptocurrencies, like for example Bitcoin, Ether, Tether, and Co., what is wrongly called Decentralized Web (DWeb) and Web3, etc. have been created or integrated or both with our Evolutionary operating system (Evoos) with its Evolutionary operating system Architecture (EosA) and our Ontologic System (OS) with its Ontologic System Architecture (OSA), which integrates all in one by its liquidity (see CHemical Abstract Machine (CHAM), microService technologies (mSx) (Java, Prolog, Mozart/Oz, Active Object, Actor, Agent, etc., the Unix way), etc.), generative and creative Bionics, and so on.
Therefore, Bitcoin, Ether, Tether, and Co. are merely illegal Ontologic Applications and Ontologic Services (OAOS) (e.g. prime factorization with BOINC and DLT) and we do hold the right to allow and license all parts of our Evoos and our OS.
Court-proof. Worldwide. Hasta la vista, crypto kiddies.
So much about legal software development, illegal Free and Open Source Software (FOSS), and the truth.
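As a purely illustrative aside, the following is a minimal, generic sketch of the hash-chaining behind the blockchain technique mentioned above; the block fields, the use of SHA-256, and the function names are our own assumptions for this sketch and do not describe Evoos, the OS, Bitcoin, or any other concrete system.

```python
# Minimal, generic sketch of the hash-chaining behind the blockchain
# technique mentioned above; the block fields and the use of SHA-256 are
# illustrative assumptions, not a description of any concrete system.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (excluding its own hash)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> dict:
    """Append a block whose 'prev' field commits to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(), "data": data, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check the prev-links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    ledger: list = []
    append_block(ledger, "genesis")
    append_block(ledger, "transfer A -> B")
    print(verify_chain(ledger))   # True
    ledger[1]["data"] = "tampered"
    print(verify_chain(ledger))   # False
```

Any modification of an earlier block invalidates all later links, which is the property such systems build on.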
09:41 UTC+1
Unix chroot jail before red line
We reviewed our
Clarification of the 1st of December 2018,
Investigations::Multimedia of the 11th of December 2018 (keywords hypervisor, kernel-less, and space),
Investigations::Multimedia of the 16th of March 2019,
Clarification of the 29th of March 2019,
Clarification of the 21st of January 2020 (keyword jail), and
OntoLix and OntoLinux Further steps or Clarification of the 3rd of April 2021.
The Proposal describing our Evolutionary operating system (Evoos) was publicized more than 4 months before the publication of the document describing FreeBSD jail. In case of our Evoos, the first verbal communication of C.S. and at least one other person was in March 1999, while in case of FreeBSD one of the main developers claims to have had the first verbal communication with an Internet Service Provider (ISP) about jail later in 1999. Maybe, and therefore the date of publication counts.
The implementation and introduction of our operating system Virtual Machine (osVM), operating system Hypervisor (osHyper), Kernel-based Virtual Machine (KVM) hypervisor, operating system-level Virtualization (osV) or containerization (kernel process containers or address spaces), control groups (cgroups), and namespaces by others were done after the publication of our Evoos and therefore are after the red line and constitute infringements of the moral rights and the copyrights of C.S. and other rights and properties of our corporation.
The implementation and introduction of Linux Kernel-based Virtual Machine (KVM) hypervisor were done in 2006 after the publication of our Evoos.
The implementation and introduction of Google Borg (cluster, container, KVM, and lock service) were done after the publication of our Evoos, while Google Borg (cluster, cgroups, KVM, and lock service) were done after the publication of our OS. We guess that only the latter variant existed, because its own container patch V1 has not been referenced in the document titled "Large-scale cluster management at Google with Borg" and publicized only in April 2015. The latter shows an attempt to fabricate prior art in relation to our OS.
Docker (container and (package) (deployment) automation), Kubernetes (cluster, Key-Value (K-V) store, and orchestration), Knative (Event-Driven Architecture (EDA)), gVisor (Docker and sandbox), Container as a Service (CaaS), virtualized containers, hypercontainers, Kata Containers, Firecracker (Virtual Machine Monitor (VMM) or Hypervisor (Hyper), microVirtual Machine (mVM), processor emulation with Hardware Abstraction Layer (HAL), Kernel-Less operating system (KLos), container, and Agent-Based System (ABS)), Hypernetes (Kubernetes, Software-Defined Networking (SDN), and management), ServerLess technologies (SLx), Cloud-native technologies (Cnx), etc. all came after the red line and hence constitute infringements of moral rights and copyrights.
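For orientation only, here is a minimal sketch of the isolation idea behind the classic Unix chroot jail named in the heading above; the jail path and the executed command are hypothetical, root privileges and a prepared jail directory are required, and the later operating system-level virtualization adds namespaces and control groups (cgroups) on top of this basic idea, which are not shown here.

```python
# Minimal sketch of the isolation idea behind a classic Unix chroot jail;
# the jail path and the command are hypothetical, the script must run as
# root on a Unix-like system, and the jail directory must be prepared
# beforehand (e.g. a static shell copied to /srv/jail/bin/sh).
import os
import sys

JAIL_ROOT = "/srv/jail"   # hypothetical, pre-populated jail directory

def run_in_jail(argv):
    pid = os.fork()
    if pid == 0:                      # child: confine, then execute
        os.chroot(JAIL_ROOT)          # make JAIL_ROOT the apparent "/" of this process
        os.chdir("/")                 # ensure the working directory lies inside the jail
        os.execv(argv[0], argv)       # the path is resolved inside the jail only
    _, status = os.waitpid(pid, 0)    # parent: wait for the jailed child
    return os.waitstatus_to_exitcode(status)   # Python 3.9+

if __name__ == "__main__":
    sys.exit(run_in_jail(["/bin/sh", "-c", "echo hello from the jail"]))
```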
We do have
real-time system,
reflective system,
nanokernel or Hardware Abstraction Layer (HAL),
microkernel (μ-kernel),
Kernel-less operating system (KLos) (OS),
Distributed operating system (Dos),
Virtual Machine Monitor (VMM) or Hypervisor (Hyper),
Virtual Machine (VM),
Virtual Virtual Machine (VVM) (reflective operating system with Virtual Machine (VM)),
Multi-Virtual Machine (MVM) (OS),
(microkernel) operating system (os) framework, including microkernel (μ-kernel), Hypervisor (Hyper), Virtual Machine (VM), Runtime Environment (RE), Input/Output (I/O) drivers, platform and device management, including Advanced Configuration and Power Interface (ACPI) and Peripheral Component Interconnect (PCI), etc. (OS, e.g. L4 - Linux with KVM - L4Re; L4 - Linux (with or without KVM) - kernel-less reflective, fractal, holonic OntoL4Re; kernel-less reflective, fractal, holonic O# _ OntoL4 _ L4Debian _ OntoL4Re),
Network Virtualization (NV) (operating system Immobile Robotic System (ImRS or Immobot) with network functionalities (e.g. driver for network card) and fusion of the hearing and the speaking with network card, and also Central Nervous System (CNS), Autonomic Nervous System (ANS), etc.),
operating system Hypervisor (osHyper) (reflective operating system with Hypervisor (Hyper)),
operating system Virtual Machine (osVM) (reflective operating system with Virtual Machine (VM)),
operating system-level Virtualization (osV) or containerization (reflective operating system with Virtual Machine (VM) respectively reflective osVM),
microkernel Hypervisor (microHyper or μ-Hyper) (reflective microkernel-based operating system with Hypervisor (Hyper)),
microkernel Virtual Machine (microVM or μ-VM) (reflective microkernel-based operating system with Virtual Machine (VM)),
microkernel-level Virtualization (microV or μ-V) or microcontainerization, and microVirtual Machine (mVM) (reflective microkernel with Virtual Machine (VM) respectively reflective μ-VM),
Remote Procedure Call (RPC) protocol and Remote Method Invocation (RMI) protocol,
asynchronous system function without system call or (kernel-less) asynchronous, non-blocking, exception-less operating system function (OS),
Associative Memory (AM) (e.g. BlackBoard System (BBS) (central space of a Multi-Agent System (MAS) (e.g. System of Loosely Coupled Applications and Services (SLCAS), Tuple Space System (TSS) (e.g. JavaSpaces), Linda System (LS), Linda-like System (LlS), etc.))),
(atomic) Active Object- and Actor Model-based or -oriented system (concurrent and lock-free or non-blocking) (e.g. concurrent Object-Oriented Actor Model-based system (e.g. BlackBoard System (BBS)), Java Jini, Maude, etc.),
Agent-Based System (ABS),
Actor-Agent-Based System (AABS),
holologic or holonic system,
Resource-Oriented technologies (ROx),
Event-Driven technologies (EDx),
Space-Based technologies (SBx),
microService technologies (mSx), including microService-Oriented technologies (mSOx) (Java, Prolog, Mozart/Oz, Active Object, Actor, Agent, etc., the Unix way, arranges an application as a collection of loosely coupled services, which can be deployed as containerless well-behaved os services or within a lightweight container),
Java JiniOS,
Ultra-Large scale, Massively Distributed System (ULMDS) or Ultra Large Distributed System (ULDS) on the basis of and for the use with massively distributed, loosely coupled resources, objects, etc.,
Grid Computing (GC or GridC) (cluster of networked, loosely coupled computers acting in concert) (OS),
Parallel Computing (PC or ParaC) (OS),
Cluster Computing (CC or ClusterC) (OS), and
all the other things
created or integrated or both with our Evoos and our OS.
12:24 UTC+1
SOPR has raw signals and data, digital and virtual assets, and MfE
Marketplace for Everything (MfE)
All contracts in relation to our coherent Ontologic Model (OM), specifically Large Language Model (LLM), are void in whole or in part.
16:55 and 17:55 UTC+1
We guess that Satoshi Nakamoto is IBM
The
very special parts,
high complexity, and
very exact similarities
of Bitcoin highly suggest that the creator of this very specific variant and utilization of our Ontologic Applications and Ontologic Services (OAOS) has taken our Ontologic System (OS) as source of inspiration and blueprint, and also deliberately infringed our copyright.
This view is supported by the fact that in the next step a first plagiarism and fake of our Peer-to-Peer Virtual Machine (P2PVM) was implemented, which is based on Bitcoin and highly supported by IBM and Microsoft.
Furthermore, there must be a reason why the alleged creator of Bitcoin, Satoshi Nakamoto, has not come forward and has not sold one of his 1.1 million Bitcoins. We guess that it is due to the extremely high damage compensations, which we demand for our Distributed Ledger Technology (DLT), on the one hand, and that the responsible entity already has several billions of U.S. Dollar under control, on the other hand. The latter supports another time our guess of a company.
And there are only a handful of persons worldwide other than C.S., who do have the required knowledge and competence. All crypto freaks and other characters, who we do know, would already be multi-billionaires officially.
13:18 UTC+1
Still big legal deficits with crypto ET things
Exchange-Traded Fund (ETF)
Exchange-Traded Commodity (ETC)
Exchange-Traded Product (ETP)
Exchange-Traded Note (ETN)
The main problems of cryptocurrencies included in such financial products have not been solved, and they were also not the subject of the ruling by the court. The latter merely ruled that digital and virtual currencies can be traded like any other commodity, property, and asset.
For example, the Non-Fungible Tokens (NFTs) or blocks of cryptocurrencies are anonymous respectively are unrelated to a legal entity. For sure, the issuers of such financial products are known, but not their sources of NFTs.
Furthermore, they are substitutes for money, which should be governed by central banks, federal reserve systems, and other monetary authorities, but not some crypto kiddies, con men, and so on.
21:14 UTC+1
Investors must be very careful now
The damages induced in the field of Hard- and SoftBionics can break every financial guard, even of large banks and national wealth funds. They do not have a jester's license in the legal scope of ... the Ontoverse (Ov), also known as OntoLand (OL).
The developments in Silicon Valley (SV), Silicon Alley (SA), and at other locations (et al) are only the beginning. And the next ones are already targeted.
External investors should not invest in technologies, goods, and services, which belong to the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies, because we are prepared and our legal team is becoming bigger and bigger and takes no prisoners.
22:35 UTC+1
Equal is Not Equal
A philosopher: "There is no peace without equality."
C.S.: "That is why the law is already the compromise. And everyone is equal before the law.
And without law and order there is no democracy and no peace of justice or legal peace, but only strife.
But some still believe that they are more equal."
See also the Comment of the Day of the 12th of May 2010.
Gleich ist Nicht Gleich
Ein Philosoph: "Frieden gibt es nicht ohne Gleichheit."
C.S. "Deshalb ist auch das Gesetz bereits der Kompromiss. Und vor dem Recht sind alle gleich.
Und ohne Recht und Ordnung gibt es keine Demokratie und keinen Rechtsfrieden, sondern nur Unfrieden.
Aber einige sind dennoch der Meinung, dass sie gleicher sind."
Siehe auch den Kommentar des Tages vom 12. Mai 2010.
09:03 UTC+1
SOPR does not tolerate illegal infrastructures
Our Society for Ontological Performance and Reproduction (SOPR) does not tolerate any illegal realization of the exclusive and mandatory infrastructures of our SOPR and our other Societies with their set of fundamental and essential
- facilities (e.g. buildings, data centers, exchange points or hubs, and transmission lines),
- technologies (e.g. systems (e.g. backbones, core networks, or fabrics), platforms, etc.),
- goods (e.g. applications, devices, robots, and vehicles), and
- services (e.g. as a Service (aaS) technologies (e.g. Capability and Operational Models (COMs), Capability and Operational Systems (COSs), and Capability and Operational Platforms (COPs), (core) Infrastructure as a Service technologies (IaaSx), (utility) Technology as a Service technologies (TaaSx), (utility) Platform as a Service technologies (PaaSx), Service as a Service technologies (SaaSx), Data as a Service technologies (DaaSx), Trust as a Service technologies (TaaSx), SoftBionics as a Service technologies (SBaaSx), etc.)),
specifically infrastructures for what
is wrongly called Cloud Computing and AI SuperComputing and
has a causal link to our Evolutionary operating system (Evoos) and our Ontologic System (OS) and has taken them as source of inspiration and blueprint, and is done to interfere with, and also obstruct, undermine, and harm, the exclusive moral rights respectively Lanham (Trademark) rights (e.g. exploitation (e.g. commercialization (e.g. monetization))) of them.
The same holds for other essential parts of our Evoos and our OS, specifically our coherent Ontologic Model (OM), including MultiModal Language Model (MMLM) and Large Language Model (LLM) utilized as Capability and Operational Model (COM), Capability and Operational System (COS), and Capability and Operational Platform (COP), and also our Ontoscope (Os), also wrongly called smartphone, AI phone, iPhone, and so on.
By the way:
Mesos (Nexus) and Data Center operating system, Kubernetes, OpenShift, Enterprise Linux CoreOS (Container Linux (CoreOS Linux)), Clear Linux OS and Clear Containers, Hyper.sh, runV, Kata Containers, and Hypernetes, gVisor, Firecracker, KataOS, and much more are illegal implementations of the Ontologic System Architecture (OSA) and Ontologic System Components (OSC), specifically of our expression of idea, compilation, integration, architecture, etc., like for example our system integrations, system mixtures, system sandwiches, and system stacks
- L4Debian based on
- L4 microkernel - Linux or other Unix-like operating system - L4Re operating system framework in general and
- L4 microkernel - Debian - O# - L4Re operating system framework or OntoL4Re framework in particular,
and
- OntoL4OntoLinux respectively OntoL4OntoLix based on
- O# - L4 microkernel - Linux or other Unix-like operating system - L4Re operating system framework in general and
- OntoL4 - L4Debian in particular,
with the basic property of (mostly) being kernel-less reflective, fractal, holonic, and including the cluster functionality, and much more.
Please note and tell your lawyers, because we tell our lawyers, prosecutors, and judges, that the
- Unix-like os is already a complete os and
- L4Re already includes the L4 microkernel, Virtual Machine Monitor (VMM) or hypervisor, Virtual Machine (VM), Runtime Environment (RE), Input/Output (I/O) drivers, platform and device management, including Advanced Configuration and Power Interface (ACPI) and Peripheral Component Interconnect (PCI), etc., so that L4Re only makes sense in kernel space or user space.
(Indeed, the forest and the trees, but now we can see the whole landscape with our messed up estate even better. Funnily enough, they needed several years to find out and, even better, before they stole this from us they stole something else, as is also the case with that Web Services (WS) at first and then Cloud Computing, and that deep learning at first and then transformer, and that other matter.)
This becomes even more clear with the field of Robotic Automation technologies (RAx), the smart contract transaction protocol, the blockchain technique, and much more.
Eventually, we will not give the allowance and license for what is wrongly called Cloud-native Computing, Software-Defined Networking, Artificial Intelligence SuperComputing, Network Artificial Intelligence, and so on.
Court-proof. Worldwide.
See also the note Unix chroot jail before red line of the 16th of March 2024 and the other publications cited therein.
See also the note % + %, % OAOS, % HW of the 28th of February 2024, because this is the basis of all following discussions, but not the obvious and court-proof infringements of the rights and properties of C.S. and our corporation anymore.
This is law and order. This is the compromise. This is equality. This is democracy.
As we have already said, we are now writing the legal documents and will then bag them up and send them by real, physical mail as registered mail with acknowledgement of receipt.
And anybody, who still refuses to submit to the law, will be visited by our lawyers with the bailiffs and the police, who will then switch off the illegal data centers and confiscate our properties respectively all illegal materials.
Wie wir schon gesagt haben, wir schreiben jetzt die rechtlichen Dokumente und werden sie dann eintüten und dann per realer, physikalischer Post als Einschreiben mit Rückschein verschicken.
Und wer sich dann immer noch nicht dem Gesetz unterwerfen will, der bekommt Besuch von unseren Anwälten mit dem Gerichtsvollzieher und der Polizei und die schalten dann die illegalen Datencenter ab und konfiszieren unser Eigentum beziehungsweise die gesamten illegalen Materialien.
10:23 and 12:40 UTC+1
Just for the unteachables
It should be well known by now that we have the opinion that especially the exclusive moral rights and Lanham (Trademark) rights of C.S. have been infringed besides other rights and properties of C.S. and our corporation and that we consider the related actions of companies of the Information and Communication Technology (ICT) industrial sector, other industrial sectors, and other entities to be illegal.
We have communicated the matter extensively over many years.
With the consent and on behalf of C.S., we alone are entitled to decide whether and when an expression of idea, compilation, architecture, etc. created by C.S., such as for example our Evolutionary operating system (Evoos) with its Evolutionary operating system Architecture (EosA) and our Ontologic System (OS) with its Ontologic System Architecture (OSA), will be introduced, implemented, and launched in the United States of America, the European Union, the People's Republic of China, and any other country.
The courts follow this argument. It prohibits ICT companies and other entities from using the expressions of idea, compilations, architectures, etc. in the course of trade, at least in the U.S.America and the European Union, by means of a temporary injunction and later also by a judgment. And in all countries, where this is not the case, we do have other measures to make them happy, nice and neutral as we always are. :)
The legal situation in general, and the copyright law, trademark law, and competition law in particular are clear. Only we, as the owner of the expressions of idea, compilations, architectures, etc. can decide if and when the performance and reproduction of these expressions of idea, compilations, architectures, etc. enter which markets as implementations of technologies, goods, and services. Only then are third parties permitted to economize these expressions of idea, compilations, architectures, etc..
The Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) include regulations for the
reconstitution, restoration, and restitution, and also transition processes,
payment of damage compensations (see the sketch after this list), which are the higher of the apportioned
- triple damage compensations induced, resulting from
- unpaid royalties for unauthorized performances and reproductions,
- omitted referencing respectively citation with attribution, and
- thwarted, obstructed, blocked, and otherwise missed commercial business possibilities and follow-up opportunities,
- profit generated illegally, or
- value (e.g. share price, market capitalization) increased or gained illegally
by
- performing and reproducing our Evoos and our OS in whole or in part without authorization respectively allowance and license, and
- interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) rights of C.S. and our corporation,
payment of admission fees,
payment of outstanding royalties,
written admission of guilt,
written confirmation of exclusive
- rights of C.S. and our corporation, including
- moral rights respectively Lanham (Trademark) rights
and
- properties, including
- copyrights,
- raw signals and data,
- digital and virtual assets, and
- online advertisement estate
transfer of all illegal materials,
establishment of new companies as joint ventures respectively execution of company takeover or merger, also known as the golden power regulation,
utilization of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) with their set of fundamental and essential facilities, technologies, goods, and services, including the cloud, Ontologic Model (OM), Capability and Operational Model (COM), MultiModal (MM), Foundation Model (FM) and Foundational Model (FM), Large Language Model (LLM) as COM, Global Brain (GB), etc., etc., etc., if and only if no interference with, and also obstruction, undermining, and harm of the exclusive rights and properties of C.S. and our corporation takes place in this way as required by the laws effective, and
payment of running royalties for Ontologic Applications and Ontologic Services (OAOS), including access and utilization of certain parts of our Ontologic System Components (OSC), Ontoverse (Ov), Ontoscope (Os), etc..
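As a reading aid for the "higher of" rule in the damage compensation item above, the following is a small sketch; every variable name and number is a hypothetical placeholder and not an actual amount, apportionment, or official calculation method.

```python
# Reading aid for the "higher of" rule above; all names and numbers are
# hypothetical placeholders, not actual amounts or an official method.
def damage_compensation(unpaid_royalties: float,
                        missed_opportunities: float,
                        illegal_profit: float,
                        illegal_value_gain: float) -> float:
    """Return the higher of the apportioned triple damages, the profit
    generated illegally, or the value increased or gained illegally."""
    triple_damages = 3 * (unpaid_royalties + missed_opportunities)
    return max(triple_damages, illegal_profit, illegal_value_gain)

# Illustrative call with made-up figures (billions of U.S. Dollar):
print(damage_compensation(10.0, 20.0, 75.0, 120.0))  # -> 120.0
```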
If one is unable to sign, pay, and comply, then it is the own problem.
See once again the notes
% + %, % OAOS, % HW of the 28th of February 2024,
SOPR does not tolerate illegal infrastructures of today,
and the other publications related to the subject matter.
By the way:
The publication and distribution of the first version of The Proposal by a known scientist at the university in 1999 was not authorized by C.S.. He just did it.
Eventually, it was out in the wild with our copyright, because the work of art has been written on our computer and printed with our printer in our rooms.
10:34 and 11:45 UTC+1
SOPR considering next LM
License Model (LM)
We give no discounts, because the preconditions have not been met, and where preconditions have been met to a certain extent, it has been done in a way that is against us as well.
We also have no reason for doing so.
Furthermore, the time for related actions is up, because the situation changed over time.
Please note once again that we have created multiple things, parts, fields, etc., including our Evolutionary operating system (Evoos) with its Evolutionary operating system Architecture (EosA), and coherent Ontologic Model (OM), our Ontologic System (OS) with its Ontologic System Architecture (OSA), Ontologic System Components (OSC), and Ontoverse (Ov), and what is wrongly called Cloud-native technologies (Cnx) by others and Cloud 3.0 by us only for better understanding, Decentralized Web (DWeb) and Web3, eXtended Mixed Reality (XMR) or simply eXtended Reality (XR), Multiverse (Mv), Foundation Model (FM), general transducer (e.g. perceiver) model, Large Language Model (LLM), and Large MultiModal Model (LMMM), what is wrongly called generative Artificial Intelligence, and much more, and therefore Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) royalties are a cumulative (increased by successive additions) amount, which could be very high.
The classification in BasicOAOS, MidOAOS, and SuperOAOS and licensee classes remains, but we are also considering adding another dimension along these multiple parts respectively concerns, topics, etc..
Compare this with a series of novels, music albums, movies, etc.. One pays for each single novel, movie, etc.. So we do as well and adjust the individual royalties accordingly. It is not all you can eat in addition to the essential facilities.
Who wants all, pays for all.
By the way:
We will make one step after the other beginning with the damage compensations.
Every entity that shows goodwill can submit an honest proposal for any alternative procedure to reach our goals. We already showed the legal scope and spectrum for doing so.
13:11 UTC+1
Clarification Cloud 3.0 'R' Us #3
We quote the document titled "CNCF Overview" and publicized by the Cloud Native Computing Foundation (CNCF) in 2020: "[...]
Cloud Native Computing Foundation
Nonprofit, part of the Linux Foundation; founded Dec. 2015
[...]
[...]
From Virtualization to Cloud Native
[Logos:] Cloud Native Computing Foundation [] Kubernetes
Cloud native computing uses an open source software stack to:
- segment applications into microservices,
- package each part into its own container
- and dynamically orchestrate those containers to optimize resource utilization
[Graphic: A Brief History of the Cloud described in detail on following slides]
[...]
Serverless in CNCF
Decomposing Serverless
Serverless Working Group published an influential whitepaper
Attributes that developers love about closed serverless platforms (which already run on containers):
- Infinite scalability
- Microbilling
- Easy app updates
- Event-driven architectures [(EDAs)]
- Zero server ops
Several projects are decomposing these into features to be available on top of Kubernetes
Serverless Landscape & CloudEvents
[...]
CloudEvents, a new CNCF project, is a common model for event data to ease cross-provider event delivery
CNCF Cloud Native Definition v1.0
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.
[...]
A Brief History of the Cloud
["Non-Virtualized Hardware"] Non-Virtualized Servers: Sun [Microsystems] (2000)
Launching a new application? Buy a new server; or a rack of them!
Building block of your application is physical servers
Virtualization: VMWare (2001)
Releases for server market in 2001
Popularizes virtual machines (VMs)
Run many VMs on one physical machine, meaning you can buy fewer servers!
Architectural building block becomes a VM
IaaS: AWS (2006)
Amazon Web Services (AWS) creates the Infrastructure-as-a-Service market by launching Elastic Compute Cloud (EC2) in 2006
Rent servers by the hour
Convert CapEx to OpEx
Architectural building block is also a VM, called an Amazon Machine Image (AMI)
[PaaS, Developer Cloud Platform: Docker (2008)
internal project within dotCloud and funded by Central Intelligence Agency (CIA)→In-Q-Tel
"One home for all your apps"
set of PaaS products for software application deployment automation]
PaaS: Heroku (2009)
Heroku popularizes Platform-as-a-Service (PaaS) with their launch in 2009
Building block is a buildpack, which enables containerized 12-factor applications
- The process for building the container is opaque, but:
- Deploying new version of an app is just: git push heroku
Open Source IaaS: OpenStack (2010)
OpenStack brings together an extraordinarily diverse group of vendors to create an open source Infrastructure-as-a-Service (IaaS)
Competes with AWS and VMWare
Building block remains a VM
Open Source PaaS: Cloud Foundry ([2009,] 2011)
[VMware,] Pivotal [Software (EMC, VMware, and General Electric), Project B29] builds an open source alternative to Heroku's PaaS and launches the Cloud Foundry Foundation in late 2014
Building block is Garden containers, which can hold Heroku buildpacks, Docker containers and even non-Linux OSes
Containers: Docker [(see also above)] ([2010, 2011,] 2013)
Docker combines [Linux Containers (]LXC[)], A Union File System [(UFS)] and [control groups (]cgroups[)] to create a containerization standard adopted by millions of developers around the world
Fastest uptake of a developer technology ever
Enables isolation, reuse and immutability
Cloud Native: ["Cloud Native Computing Foundation (]CNCF[) [] Kubernetes"] (2015)
Cloud native computing uses an open source software stack to:
- segment applications into microservices,
- package each part into its own container
- and dynamically orchestrate those containers to optimize resource utilization
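[Illustrative insertion, not part of the quoted slides: a minimal sketch of the "dynamically orchestrate those containers" step with the official Kubernetes Python client, assuming a reachable cluster and a local kubeconfig; the names, labels, and image are placeholders. The declared replica count is then reconciled continuously by the control plane.
from kubernetes import client, config

config.load_kube_config()
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state; the scheduler places the containers
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
]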
What Have We Learned?
Core Building Block:
- Servers → Virtual Machines → Buildpacks → Containers
Isolation Units
- From heavier to lighter weight, in spin-up time and size
Immutability
- From pets to cattle
Provider
- From closed source, single vendor to open source, cross-vendor
What About PaaS?
OpenShift, Huawei CCE, and Flynn are examples of PaaS's built on top of cloud native platforms
Many new applications start out as 12-factor apps deployable on a PaaS
- In time they sometimes outgrow PaaS
- And some apps never fit a PaaS model
PaaS on top of Kubernetes supports both" [quote added on the 7th of May 2024]
Comment
We consider the definition of the CNCF to be a copyright infringement.
See also the other comments below.
We also took a quick look at the document titled "Architecting Cloud-Native .NET Apps for Azure", publicized by the company Microsoft around 2020: "Let's start with a simple definition:
Cloud-native architecture and technologies are an approach to designing, constructing, and operating workloads that are built in the cloud and take full advantage of the cloud computing model.
The Cloud Native Computing Foundation [(CNCF)] provides the official [illegal] definition:
[See the section CNCF Cloud Native Definition v1.0 of the document titled "CNCF Overview" and quoted above.]
Cloud native is about speed and agility. Business systems are evolving from enabling business capabilities to weapons of strategic transformation [...].
At the same time, business systems have also become increasingly complex with users demanding more. [...] Cloud-native systems are designed to embrace rapid change, large scale, and resilience."
"Figure 1-10. Strategies for migrating legacy workloads
The Journey to the Cloud
On-Premises [] Infrastructure Platform
[1.] Existing apps & services
.NET Framework [] on-premises
IaaS [] "Lift & shift" [] No code changes
[2.] Cloud Infrastructure-Ready
apps
Rehost [arrow from 1. to 2.]
Containers & PaaS [] Minimal code changes
[3.] Cloud Optimized
apps
Containerize + cloud managed services [arrows from 1. and 2. to 3.]
Microservices Architecture & Services [] Architected for the cloud [] modernized/rewrite
[4.] Cloud-Native
apps
Rearchitect/Rebuild [arrows from 1., 2., and 3. to 4.]
Migrate [2.]
Modernize [3. and 4.]"
Comment
We also took a quick look at 1 smaller graphic publicized by the company Vodafone: "Cloud-Native Journey
Legacy
Constraints
Proprietary hardware requirements
Reliance on hardware-level [Service Level Agreement (]SLA[)]
Licenses tied to infrastructure or peak usage
Virtualised
All components virtualised on commodity hardware
No reliance on HW SLA - moves to hypervisor
Partially automated app build & run
Manual DR
Cloud Ready
As per virtualised, PLUS:
Fully automated app build & run
No reliance on OS-level SLA
API-centric
Redeploy instead of restore
Infrastructure as code via API
Cloud Native
As per cloud-ready, PLUS:
100% API driven
Stateless [Serverless and Function as a Service (FaaS), per legacy uses REpresentational State Transfer (REST)]
Self-healing
Auto-scaling [Serverless and FaaS, also done with per virtualised]
Separate metering and billing [also done with per virtualised]
Self-discovering
Rich, tailored telemetry
OR:
Use [Software as a Service (]SaaS[)] Solution" [quote added on the 6th of May 2024]
Comment
The title and the content of the first chapter of the second quoted document with the figures 1-1 to 1-10, Strategies for migrating legacy workloads - The Journey to the Cloud respectively Ontologic System and Ontoverse, and also the additional excerpts from the other quoted documents are already sufficient, because they are well-architected, insightful, and show nicely what we have worked out as well, specifically the evolution of those plagiarisms and fakes from
{better temporal order (?!)}
operating system (os), and Interconnected network (Internet), and Hypertext Transfer Protocol (HTTP)
over
Distributed operating system (Dos), Kernel-Less operating system (KLos), and Agent-Based operating system (ABos), and microService technologies (mSx) (e.g. microService-Oriented technologies (mSOx)),
and from
World Wide Web (WWW), and REpresentational State Transfer (REST), and Web Services (WS)
over
microService-Oriented technologies (mSOx) and other Service-Oriented technologies (SOx), and Remote Procedure Call (RPC)
to
ServerLess technologies (SLx) and Cloud-native technologies (Cnx), and Interconnected supercomputer (Intersup) (keywords actor-based, resilient, Distributed operating system (Dos), runtime, environment, and integrating architecture, and also automation and Autonomic Computing (AC)).
Also recall that the term polyglot is a synonym for the term multilingual.
See the related messages, notes, explanations, clarifications, investigations, and claims, specifically
Clarification Cloud 3.0 'R' Us as well of the 16th of June 2023,
Microsoft confirmed our claims once again of the 31st of January 2024,
Microsoft 90% + 10% of the 25th of February 2024,
DCos, CnC, aaSx, SDN, SD-WAN, ON, OW, OV, etc. no license of the 27th of February 2024,
Unix chroot jail before red line of the 16th of March 2024, and
Clarification Cloud 3.0 'R' Us #2 Just more 'R' Us of the 28th of February 2024.
We also note that the emphasis of the API in relation to cloud-ready and cloud-native somehow reminds us of the legal case Oracle vs. Google and its incomprehensible ruling by the U.S.American Supreme Court in relation to the API of the Java programming language and the Java Virtual Machine (JVM). Indeed, we have the impression that this ruling is also targeting our Ontologic System (OS), which would also explain why Oracle showed no outrage about it.
13:11 UTC+1
% + %, % OAOS, % HW #3
In fact, they all thought that nothing would follow in the fields of operating system (os), Interconnected network (Internet), World Wide Web (WWW), Artificial Intelligence (AI), Machine Learning (ML), Artificial Neural Network (ANN), Mixed Reality (MR), and so on until C.S. came and created. And after around 20 years they suddenly (not really) arrived where we have already been all the time, and we have several revolutions and even something totally or entirely new, as envisioned in 1998 and presented in 1999 and 2006.
Our moral rights, copyrights, competition rights, etc. need no further discussion. We will not go home with empty hands, which means we enjoy a legal monopoly and exclusive exploitation rights. And we do know that they do know.
Ultimately, we have already totally nailed down the company Microsoft and the other companies, those fraudulent and even serious criminal organizations, foundations, and other entities, and do not need to submit 3 million pages to the courts to prove the facts and our rights and properties.
Their attempt to go all in has failed, as we said several times in the past, and resulted in a giant heap of bull$#!+ and we will not pay for cleaning up that mess. And that understanding of democracy is messy and is irrelevant for us, because the law is already the compromise.
Therefore, our estimations of the ratios of company shares, including share capital or capital stocks, including voting and preferred shares, or common and preferred stocks, are for example
Us:Microsoft 90%:10% and
Us:Alphabet (Google) 88%:12%,
and our other estimations of such ratios are quite precise and realistic as well, and the exact ratios will be approved by the apportionment analysis of external consultants on the basis of standard industry-proven examination and assessment methods.
We will add to the letter before action a standard letter of declaration, which responsible managers and members of the boards can sign and confirm that our Evolutionary operating system (Evoos) and our Ontologic System (OS) have been taken as sources of inspiration and blueprints for cloud, multiverse, generative Artificial Intelligence (AI), etc., and that causal links with their technologies, goods, and services exist, as we have shown, and therefore our moral rights and Lanham (Trademark) rights have been infringed.
They have to confirm it anyway, because we have shown that they all have done so alone and in collaboration, but not in the course of a
seamless technological development,
mandated steps of an ordinary technological progress, or
technical benefit for the society, or
combination of prior art in a bottom-up approach, or
required actions in a legal competition, or
essential facilities for a fair competition, or
societal benefit pro bono publico==for the public good.
But we are not sure if that works out-of-court, because there are just too many big legal problems.
See also once again the Comment of the Day of the 28th of January 2024.
We also noticed (once again) that Free and Open Source Software (FOSS) is incompatible with moral rights and Lanham (Trademark) rights in general. Creators have to be referenced and give their allowance when and where a performance, reproduction, presentation, and so on takes place. But in the case of C.S., the FOSS actors and communities have refused to do so all the time, and as a reaction we do not allow anything anymore, and instead enjoy and enforce our exclusive exploitation rights, and demand the payment of damage compensations, the transfer of all illegal materials, and all the other actions.
We also noticed (once again) that they gave away their implementations of our Evoos and our OS as FOSS to play out their quantities (e.g. size and market power) against our qualities (e.g. creativity and intellect). But all laws are drafted in ways that
prohibit exactly that strategy and attitude on the one side and
guarantee complete protection and exclusive exploitation as reward for creators and inventors on the other side,
so that creations and innovations happen at all.
And this is only 1 of around 30 illegal dirty tricks, which we have documented for legal actions and will result in around 100 different charges.
What is left to steal? They already have stolen
operating system Virtual Machine (osVM),
os-level Virtualization (osV),
message passing,
Master-Worker pattern,
Agent-Based System (ABS) and Agent-Oriented Programming (AOP),
Key-Value (K-V) store,
virtualized container,
Actor Model (AM), and
multilingual or polyglot runtime, as well as
automation,
Autonomic Computing (AC),
and so on,
and they want more
Artificial Intelligence (AI), for example
- coherent Ontologic Model (OM) and
- Network Artificial Intelligence (NetAI).
Therefore,
Tuple Space System (TSS) (if not already stolen with Kubernetes),
Distributed Artificial Intelligence (DAI),
Multi-Agent System (MAS) (if not already stolen with Kubernetes, etc. and Kata Containers, Firecracker, etc.),
Cognitive Agent System (CAS), and
Global Brain (GB), and also
Multiverse (Mv),
eXtended Mixed Reality (XMR) or eXtended Reality (XR),
and so on
are missing and thus are the next consequent development steps in addition to the rest of our Evoos and our OS, and also our corporation.
08:54 UTC+1
Banks, SWFs, IFs, and Co. are next
Sovereign Wealth Fund (SWF)
Investment Fund (IF)
Goldman Sachs, Deutsche Bank, and some more banks have been on our watchlist since forever. JPMorgan and other banks have been added to our lists as well.
About Blackstone, Sequoia Capital, and so on we do not need to talk as well.
What has been added as well are national actors. They already know it in Arabia, and Norway should also not fall prey to any illusions.
We are considering blacklisting all companies, which got an investment from fraudulent or even serious criminal banks, SWFs, IFs, etc., if they have not paid damage compensations.
In this respect, entrepreneurs have to be very careful with what they do in this relation as well.
10:33 UTC+1
Ontolab Further steps
We are close to an operational Caliber/Calibre.
We quote a shortened chapter of a book, which took our Ontologic System (OS) and our Caliber/Calibre as source of inspiration and blueprint: "
Que sera[,] sera
Whatever will be[,] will be
The future's not ours to see
Que sera[,] sera.
So sang Doris Day in 1956 [...]
[...]
Aristotle formulated the openness of the future in the language of logic. Living in Athens at a time when invasion from the sea was always a possibility, he made his argument using the following sentence: 'There will be a sea-battle tomorrow.' One of the classical laws of logic is the 'law of the excluded middle' which states that every sentence is either true or false: either the sentence is true or its negation is true. But Aristotle argued that neither 'There will be a sea-battle tomorrow' nor 'There will not be a sea-battle tomorrow' is definitely true, for both possibilities lead to fatalism; if the first statement is true, for example, there would be nothing anybody could do to avert the sea-battle. Therefore, these statements belong to a third logical category, neither true nor false. In modern times, this conclusion has been realised in the development of many-valued logic.
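[Illustrative insertion, not part of the quoted book: a minimal sketch of a three-valued logic in the spirit of Kleene and Lukasiewicz, in which a future contingent such as the sea-battle sentence is neither true nor false; the value names and the numeric encoding are our own assumptions.
# Truth values: 1.0 = true, 0.0 = false, 0.5 = undetermined (the "third category").
TRUE, UNDETERMINED, FALSE = 1.0, 0.5, 0.0

def neg(a): return 1.0 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

sea_battle_tomorrow = UNDETERMINED  # Aristotle's future contingent
# The law of the excluded middle is no longer a tautology:
print(disj(sea_battle_tomorrow, neg(sea_battle_tomorrow)))  # 0.5, not 1.0
]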
[...]
The success of modern science gave rise to the idea that this is always true: not knowing the future can always be traced back to not knowing something about the present. As more and more phenomena came under the sway of the laws of physics, so that more and more events could be explained as being caused by previous events, so confidence grew that every future event could be predicted with certainty, given enough knowledge of the present. The most famous statement of this confidence was made by the French mathematician Pierre-Simon Laplace in 1814:
["]We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.["]
This idea goes back to Isaac Newton, who in 1687 had a dream:
["]I wish we could derive the rest of the phenomena of Nature by the same kind of reasoning from mechanical principles, for I am induced by many reasons to suspect that they may all depend upon certain forces by which the particles of bodies, by some causes hitherto unknown, are either mutually impelled towards one another, and cohere in regular figures, or are repelled and recede from one another.["]
In this view, everything in the world is made up of point particles, and their behaviour is explained by the action of forces that make the particles move according to Newton's equations of motion. These completely determine the future motion of the particles if their positions and velocities are given at any one instant; the theory is deterministic. So if we fail to know the future, that is purely because we do not know enough about the present.
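[Illustrative insertion, not part of the quoted book: a compact restatement of the determinism claim in standard notation; the symbols are ours. For $N$ point particles with masses $m_i$, Newton's equations of motion read
$$m_i \, \ddot{\mathbf{r}}_i(t) = \mathbf{F}_i\bigl(\mathbf{r}_1(t), \dots, \mathbf{r}_N(t)\bigr), \qquad i = 1, \dots, N,$$
and, for sufficiently well-behaved forces, the initial positions $\mathbf{r}_i(t_0)$ and velocities $\dot{\mathbf{r}}_i(t_0)$ determine a unique solution for all later times, which is exactly the determinism described in the quoted passage.]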
[...]
Well, it was a nice dream. But it didn't work out that way. In the early years of the 20th century, Ernest Rutherford, investigating the recently discovered phenomenon of radioactivity, realised that it showed random events happening at a fundamental level of matter, in the atom and its nucleus. This did not necessarily mean that Newton's dream had to be abandoned: the nucleus is not the most fundamental level of matter, but is a complicated object made up of protons and neutrons, and - maybe - if we knew exactly how these particles were situated and how they were moving, we would be able to predict when the radioactive decay of the nucleus would happen. But other, stranger discoveries at around the same time led to the radical departure from Newtonian physics represented by quantum mechanics, which strongly reinforced the view that events at the smallest scale are indeed random, and there is no possibility of precisely knowing the future."
Comment
Well, Fuzzy Logic (FL), Computational Complexity Theory (CCT), Algorithmic Information Theory (AIT), "Deterministic Chaos", etc., etc., etc..
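As a minimal illustrative sketch of what is meant by "Deterministic Chaos" (the parameter and the starting values are arbitrary textbook examples, not taken from any of the quoted or cited works): the logistic map is fully deterministic, yet two almost identical initial conditions diverge after a few dozen iterations, so precise long-term prediction would require infinitely precise knowledge of the present.
# Logistic map x_{n+1} = r * x_n * (1 - x_n), fully deterministic but chaotic at r = 4.
r = 4.0
x, y = 0.2, 0.2 + 1e-10  # two almost indistinguishable "present states"
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(abs(x - y))  # of order 1: the tiny initial difference has been amplified enormously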
See also the
Clarification of the 28th of April 2016,
Short summary of clarification of the 11th of June 2022,
and the other publications cited therein.
The dream of Isaac Newton and the vision of Pierre-Simon Laplace could soon be fulfilled.
By the way:
Copyright infringement by the authors of the quoted work, who took our Ontologic System (OS) with our Caliber/Calibre as source of inspiration and blueprint without referencing.
And C.S. is so successful in everything, because we simply throw away all that important scientific and political blah blah blah, save all that criminal energy required to get the properties of others, and just do it right and make our own thing.
Ontology, Ontologics, Stars do not lie.
15:49 and 23:47 UTC+1
Ontonics Further steps
We have begun to clean up some inconsistencies on our website OntomaX regarding terms and definitions, but also explanations and clarifications.
We also continued to collect legally relevant materials and sift through and organize the materials, which have already been collected.
Both activities are related with the finalization of the set of legal documents.
After that, it's just a matter of writing beautifully.
As we already said in the past, the rights and properties of C.S. and our corporation are not negotiable. Specifically, the moral rights and Lanham (Trademark) rights of C.S. are not contestable anymore due to the quantity and quality of the plagiarisms and fakes, and conspiracies and plots (the same industries and the same companies have stolen the same original and unique, unforeseeable and unexpected, personal, revolutionary and iconic expressions of idea at the same time by the same fraudulent and even serious criminal actions from the same creator alone and in collaboration). This means the confirmation of the copyrights for the related original and unique works of art, and of the exploitation rights, modification rights, first presentation rights, naming rights, and other rights, which in turn implies the justification of the damage compensations, which are the higher of the apportioned
- triple damage compensations induced, resulting from
- unpaid royalties for unauthorized performances and reproductions,
- omitted referencing respectively citation with attribution, and
- thwarted, obstructed, blocked, and otherwise missed commercial business possibilities and follow-up opportunities,
- profit generated illegally, or
- value (e.g. share price, market capitalization) increased or gained illegally
by
- performing and reproducing our Evolutionary operating system (Evoos) and our Ontologic System (OS) in whole or in part without authorization respectively allowance and license, and
- interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) rights of C.S. and our corporation.
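As a minimal arithmetic sketch of one possible reading of the "higher of" rule listed above (all figures are placeholders without any legal significance):
def damage_compensation(unpaid_royalties, omitted_attribution, missed_opportunities,
                        illegal_profit, illegal_value_gain):
    # Triple the sum of the three induced components, then take the higher of
    # that amount, the illegally generated profit, and the illegally gained value.
    tripled = 3 * (unpaid_royalties + omitted_attribution + missed_opportunities)
    return max(tripled, illegal_profit, illegal_value_gain)

print(damage_compensation(10.0, 2.0, 8.0, 45.0, 70.0))  # -> 70.0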
We also noticed that a certain consensus regarding the maximal royalties for Ontologic Applications and Ontologic Services (OAOS) of the Information and Communication Technology (ICT) licensee class exists with 17% (see also the note SOPR considering next LM of the 22nd of March 2024), but this depends on several factors and does not include InfrastructureaaSx, PlatformaaSx, BackendaaSx, FunctionaaSx, SoftBionicsaaSx, eventually what is wrongly called Cloud-native Computing (CnC) or simply cloud, generative AI, personal assistant, chatbot, and other foundational and essential parts of our Ontologic System (OS), which will be provided by us with the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies.
But the whole matter does not need further elaboration, because everything, that can be said, has been said.
06:00 and 18:11 UTC+1
Investigations::Multimedia, AI and KM
We quote an interview of a plagiarist by another plagiarist, which is about essential parts of our Ontologic System (OS) and was publicized on the 18th of March 2024 (01:55:00): "OpenAI, GPT-5, Sora, Board Saga, [plagiarist 3], Ilya, Power & [Artificial General Intelligence (]AGI[)]
[Plagiarist 1, Sam Altman:] I think compute is going to be the currency of the future. I think it'll be maybe the most precious commodity in the world. [...] The road to AGI should be a giant power struggle. I expect that to be the case. [We always say that one can never have enough of 2 things: (electric) energy and bandwidth, so to say two different kinds of power. See also the Comment of the Day of the 2nd of December 2019.]
[Plagiarist 2, Moderator:] Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? [We already said several years ago that our OS already includes AGI, which is also one of the many reasons why they all are stealing it and why we are taking it back by executing our powers and exclusive rights (e.g. moral rights respectively Lanham (Trademark) rights) (e.g. exploitation (e.g. commercialization (e.g. monetization))) and properties (e.g. copyrights, raw signals and data, digital and virtual assets, and online advertisement estate), which grant us a legal, because innocent monopoly. Furthermore, the world cannot trust notorious liars, kleptomaniacs, and serious criminals, specifically not in this case of our Evoos and our OS.]
[...]
[Plagiarist 1, Sam Altman:] [...] I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen. [...] [Why is he speaking in the past tense "when we created AGI, there was going"?]
[Plagiarist 2, Moderator:] But the thing you had a sense that you would experience is some kind of power struggle?
[Plagiarist 1, Sam Altman:] The road to AGI should be a giant power struggle. The world should ...Well, not should. I expect that to be the case. [We do not think so, because the law is already the compromise, and we have the rights and properties, so that this situation does not exist at all.]
[...]
[Plagiarist 1, Sam Altman:] [...] And I think that it is valuable that this [drama with the board of OpenAI] happened now in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we've got to get right for AGI, but thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer, I think that's super important. [He is also a typical self-exposer and busybody. We also note resilient structure, which reminds us of our resilient OS and the exclusive and mandatory infrastructures of our SOPR and our other Societies.]
[...]
[Plagiarist 1, Sam Altman:] Look, I think you definitely need some technical experts there. And then you need some people who are like, "How can we deploy this in a way that will help people in the world the most?" And people who have a very different perspective. I think a mistake that you or I might make is to think that only the technical understanding matters, and that's definitely part of the conversation you want that board to have, but there's a lot more about how that's going to just impact society and people's lives that you really want represented in there too. [He really thinks that he is that ingenious. Better come down on earth again as quick as possible.]
[...]
[Plagiarist 1, Sam Altman:] [...] I was just in the middle of this firefight, [...]. It was like a battle fought in public to a surprising degree [...].
[...]
[Plagiarist 1, Sam Altman:] [...] You learn a weird thing about adrenaline in wartime. [Despite the firefight, battle, fight, and wartime, he will never be our hero. We recommend getting a good psychologist or other professional help.]
[...]
[Plagiarist 1, Sam Altman:] [...] I really care about the people here, the partners, shareholders. I love this company. [...] [But most of all, he loves himself above all by far.]
[...]
[Plagiarist 2, Moderator:] There's a meme that he saw something, like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya [Sutskever] see?
[Plagiarist 1, Sam Altman:] Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. And as we continue to make significant progress, Ilya is one of the people that I've spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right. [For sure, they have seen AGI, because they have had nothing else to do for more than 8 years than stealing our OS. And Ilya is just only another dilettante, who does not really know what they are doing, as we have already proven and are proving with this investigation once again.]
[Plagiarist 2, Moderator:] I've had a bunch of conversation with him in the past. I think when he talks about technology, he's always doing this long-term thinking type of thing. So he is not thinking about what this is going to be in a year. He's thinking about in 10 years, just thinking from first principles like, "Okay, if this scales, what are the fundamentals here? Where's this going?" And so that's a foundation for them thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. [...] [Unbelievable how those chatterboxes adulate totally ordinary matters. There must be something in the air, water, or food, or their dealers are selling bad stuff.]
[...]
[Plagiarist 1, Sam Altman:] [...] I think it was a perfect storm of weirdness. It was a preview for me of what's going to happen as the stakes get higher and higher and the need that we have robust governance structures and processes and people. [...] [Wake us up when everything has calmed down again. Until then, we go into the light, because there are fewer waves there.]
[...]
[Plagiarist 1, Sam Altman:] [...] I don't know what [the lawsuit of plagiarist 3 against OpenAI is] really about. We started off just thinking we were going to be a research lab and having no idea about how this technology was going to go. Because it was only seven or eight years ago, it's hard to go back and really remember what it was like then, but this is before language models were a big deal. This was before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we're like, "We're just going to try to do research and we don't really know what we're going to do with that." [...] And it doesn't mean I wouldn't do it totally differently if we could go back now with an Oracle, but you don't get the Oracle at the time. [...] [Guess why we are documenting and investigating everything, which is going on in relation to our Evoos and our OS, as we are doing here now as well. So, if he or others want to recall what they thought and did around 7 or 8 years ago, then just look at what we have publicized on this website of OntomaX. It crystal clearly shows that their intention was and still is "We must steal that OS and get it under our control." Eventually, no oracle was required respectively only our vision and our presentations of the creations of C.S.. We also note that our claim has been confirmed that they have established a not-for-profit venture, group, foundation, consortium, or company respectively Non-Profit Organization (NPO) with the intention to produce a plagiarism and fake of our Evoos and our OS. That is quite a dirty trick and even looks like a fraud. We also note that our claim has been confirmed that our coherent Ontologic Model (OM), including our MultiModal Artificial Neural Network (MMANN), what is wrongly called Foundation Model (FM), and LLM, and our generative and creative Bionics were original and unique, unforeseeable and unexpected by an expert in the related fields respectively a Person of Ordinary Skill In The Art (POSITA) at the time of their creation by C.S.. Everything is court-proof, as always.]
[...]
[Plagiarist 2, Moderator:] I do think there's a degree of mischaracterization from [plagiarist 3] here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys are a small group of researchers crazily talking about AGI when everybody's laughing at that thought. [There was no uncertainty, but just our original and unique works of art created by C.S.. Furthermore, not everybody was laughing about AGI at that time, as we have also documented on this website, because the other entities also have had nothing else to do for even around 23 to 26 years than to steal our Evoos and our OS. In fact, OpenAI came a little late to the party.]
[...]
[Plagiarist 1, Sam Altman:] [Plagiarist 3] thought OpenAI was going to fail. He wanted total control to turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla. We didn't want to do that, and he decided to leave, which that's fine.
[Plagiarist 2, Moderator:] So you're saying, and that's one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla in the same way that, or maybe something similar or maybe something more dramatic than the partnership with Microsoft.
[Plagiarist 1, Sam Altman:] My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it. I'm pretty sure that's what it was. [We refer to the note OpenAI is just many things it should not be of the 6th of March 2024 and the concluding comment after these quotes.]
[...]
[Plagiarist 1, Sam Altman:] Speaking of going back with an Oracle, I'd pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free, as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say it's part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that, that's a big deal. So if we can keep putting free or low cost or free and low cost powerful AI tools out in the world, I think that's a huge deal for how we fulfill the mission. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer. [So, so, the mission of OpenAI is to give away for free what has been stolen from us, and to mimic C.S. and our corporation. Computer says: No! And about Free and Open Source Software (FOSS) we do not discuss anymore. FOSS was never intended for the brand new things, which even are protected by copyright or patent right, but for the standard software and hardware, which just lacked alternative implementation, distribution, and maintenance. The rest is just marketing and fraud to substitute quality by smaller entities with quantity by bigger entities.]
[...]
[Plagiarist 2, Moderator:] Maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way. [We already showed multiple times that OpenAI is merely implementing plagiarisms and fakes of our Evoos and our OS, and mimicking C.S. and our corporation. So much about competent and trustworthy leadership.]
[Plagiarist 1, Sam Altman:] Look, I mean [another plagiarism and fake of our Ontologic roBot (OntoBot)] had not open sourced anything until people pointed out it was a little bit hypocritical and then he announced that [another plagiarism and fake of our Ontologic roBot (OntoBot)] will open source things this week. I don't think open source versus not is what this is really about for him.
[...]
[Plagiarist 2, Moderator:] Yeah, he's one of the greatest builders of all time, potentially the greatest builder of all time. [Oh, what a [net zero cursing]. Howsoever, C.S. would never sell an unfinished product, which is killing people, specifically not for the reason to keep a hidden Ponzi scheme running.]
[...]
[Plagiarist 2, Moderator:] [...] But on the question of open source, do you think there's a lot of companies playing with this idea? It's quite interesting. I would say Meta surprisingly has led the way on this, or at least took the first step in the game of chess of really open sourcing the model. Of course it's not the state-of-the-art model, but open sourcing Llama. Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with this idea?
[Plagiarist 1, Sam Altman:] Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally, I think there's huge demand for. I think there will be some open source models, there will be some closed source models. It won't be unlike other ecosystems in that way. [That is nonsense. Giving a brand new product or service away for free is just extraordinarily silly. This implies that another reason for doing so must exist and we have already explained why the companies Meta (Facebook), Alphabet (Google), Microsoft, and Co. are supporting FOSS: What they are giving away in this field and some other fields does not belong to them at all, but to us, and what they are doing is merely damaging, diluting, and devaluing, and even destroying the identities, authenticities, integrities, reputations, and momenta, as well as follow-up opportunities, works, and achievements of C.S. and our corporation. Of course, court-proof, too.]
[...]
[Plagiarist 1, Sam Altman:] I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I'd heavily discourage them from doing that. I don't think we'll set a precedent here. [That is common practice with the spin-offs at the universities and other research institutes worldwide, which are often funded by the governments but also the industries.]
[Plagiarist 2, Moderator:] Okay. So most startups should go just
[Plagiarist 1, Sam Altman:] For sure.
[...]
[Plagiarist 1, Sam Altman:] If we knew what was going to happen, we would've done that too. [Also [net zero cursing]. The fact is that OpenAI would not exist at all anymore, because that was their trick to get worldwide publicity rather quickly and to misuse that non-profit status for their other activities and benefits.]
[...]
[Plagiarist 2, Moderator:] So speaking of cool [$#!+], Sora. [...] It truly is amazing on a product level but also just on a philosophical level. So let me just technical/philosophical ask, what do you think it understands about the world more or less than GPT-4 for example? The world model when you train on these patches versus language tokens. [Philosophy, world model, language, ... Bingo!!!]
[Plagiarist 1, Sam Altman:] I think all of these models understand something more about the world model than most of us give them credit for. And because they're also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil and say, "Ah, this is all fake." But it's not all fake. It's just some of it works and some of it doesn't work.
I remember when I started first watching Sora videos and I would see a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, "Oh, this is pretty good." Or there's examples where the underlying physics looks so well represented over a lot of steps in a sequence, it's like, "Oh, this is quite impressive." But fundamentally, these models are just getting better and that will keep happening. [...] [We said directly after the start of Sora that it is just another plagiarism and fake of an essential part of our OS, because it is not about two-dimensional (2D) videos anymore, but our coherent Ontologic Model (OM), including MultiModal Artificial Neural Network (MMANN), what is wrongly called Foundation Model (FM), and LLM, our generative and creative Bionics, and also three-dimensional (3D) spaces, environments, worlds, and universes. See also the other comments related to Sora and our integration of our LLM with our eXtended Mixed Reality (XMR) or simply eXtended Reality (XR), and so on.]
[Plagiarist 2, Moderator:] Well, the thing you just mentioned is the occlusions is basically modeling the physics of the three-dimensional physics of the world sufficiently well to capture those kinds of things. [Bingo!!!]
Or yeah, maybe you can tell me, in order to deal with occlusions, what does the world model need to?
[Plagiarist 1, Sam Altman:] Yeah. So what I would say is it's doing something to deal with occlusions really well. What I represent that it has a great underlying 3D model of the world, it's a little bit more of a stretch. [Bingo!!!]
[Plagiarist 2, Moderator:] But can you get there through just these kinds of two-dimensional training data approaches?
[Plagiarist 1, Sam Altman:] It looks like this approach is going to go surprisingly far. I don't want to speculate too much about what limits it will surmount and which it won't, but ... [They have absolutely no clue, what they are doing all the time. They have seen something on our websites and want to steal it for making money. What we see at OpenAI and the other AI kiddies with their AI crap is just the beginning of the next AI winter for various reasons and therefore we already said some weeks ago that the party is over, because the old low hanging fruits have already been picked and were sold as brand new.]
[Plagiarist 2, Moderator:] What are some interesting limitations of the system that you've seen? I mean there's been some fun ones you've posted.
[Plagiarist 1, Sam Altman:] There's all kinds of fun. I mean, cat's sprouting an extra limit at random points in a video. Pick what you want, but there's still a lot of problem, there's a lot of weaknesses.
[Plagiarist 2, Moderator:] Do you think it's a fundamental flaw of the approach or is it just bigger model or better technical details or better data, more data is going to solve the cat sprouting.
[Plagiarist 1, Sam Altman:] I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also I think it'll get better with scale. [They do not have enough fuel, which they can add to the fire, which is required to keep it burning and make it bigger, and it will not avoid the AI winter anyway. But at least, there is a lot of money to burn and keep the bubble intact. Maybe.]
[Plagiarist 2, Moderator:] Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches so it converts all visual data, a diverse kinds of visual data videos and images into patches. Is the training to the degree you can say fully self supervised, there's some manual labeling going on? What's the involvement of humans in all this?
[Plagiarist 1, Sam Altman:] I mean without saying anything specific about the Sora approach, we use lots of human data in our work. [Let us comment this in the following way: We are open for business.]
[Plagiarist 2, Moderator:] But not internet scale data? So lots of humans. Lots is a complicated word, Sam.
[Plagiarist 1, Sam Altman:] I think lots is a fair word in this case.
[Plagiarist 2, Moderator:] Because to me, "lots" ... Listen, I'm an introvert and when I hang out with three people, that's a lot of people. Four people, that's a lot. But I suppose you mean more than ...
[Plagiarist 1, Sam Altman:] More than three people work on labeling the data for these models, yeah.
[Plagiarist 2, Moderator:] Okay. Right. But fundamentally, there's a lot of self supervised learning. Because what you mentioned in the technical report is internet scale data. That's another beautiful ... It's like poetry. So it's a lot of data that's not human label. It's self supervised in that way?
[Plagiarist 1, Sam Altman:] Yeah.
[Plagiarist 2, Moderator:] And then the question is, how much data is there on the internet that could be used in this that is conducive to this kind of self supervised way if only we knew the details of the self supervised. Have you considered opening it up a little more details?
[Plagiarist 1, Sam Altman:] We have. You mean for source specifically?
[Plagiarist 2, Moderator:] Source specifically. Because it's so interesting that can the same magic of LLMs now start moving towards visual data and what does that take to do that? [This integration as a MultiModal Artificial Neural Network (MMANN) or what is wrongly called Foundation Model (FM), general transducer (e.g. perceiver) model, etc. is a copyright infringement.]
[Plagiarist 1, Sam Altman:] I mean it looks to me like yes, but we have more work to do.
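[Illustrative insertion, not part of the quoted interview: a minimal sketch of cutting one image into fixed-size square patches and flattening each patch into a vector ("visual token"), in the general spirit of the "visual patches" mentioned above; the sizes are arbitrary assumptions and this is not OpenAI's actual method.
import numpy as np

def patchify(image: np.ndarray, patch: int) -> np.ndarray:
    # image: (H, W, C) with H and W divisible by patch -> (num_patches, patch*patch*C)
    h, w, c = image.shape
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)        # (rows, cols, patch, patch, C)
    return grid.reshape(-1, patch * patch * c)  # one flattened vector per patch

frame = np.random.rand(64, 64, 3)  # stand-in for a single video frame
tokens = patchify(frame, patch=16)
print(tokens.shape)                # (16, 768)
]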
[Plagiarist 2, Moderator:] Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?
[Plagiarist 1, Sam Altman:] I mean frankly speaking, one thing we have to do before releasing the system is just get it to work at a level of efficiency that will deliver the scale people are going to want from this so that I don't want to downplay that. And there's still a ton ton of work to do there. But you can imagine issues with deepfakes, misinformation. We try to be a thoughtful company about what we put out into the world and it doesn't take much thought to think about the ways this can go badly. [No, it is exactly like with those killer cars of plagiarist 3, they are selling unfinished products and services.]
[Plagiarist 2, Moderator:] There's a lot of tough questions here, you're dealing in a very tough space. Do you think training AI should be or is fair use under copyright law?
[Plagiarist 1, Sam Altman:] I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it, and that I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things. We've tried some different models. But if I'm like an artist for example, A, I would like to be able to opt out of people generating art in my style. And B, if they do generate art in my style, I'd like to have some economic model associated with that. [Sign, pay, and comply for performing and reproducing certain parts of our OS. But before doing this, pay a lot of damage compensations.]
[...]
[Plagiarist 2, Moderator:] Well, there should be some kind of incentive if we zoom out even more for humans to keep doing cool [$#!+]. [Sign, pay, and comply. But before, pay a lot of damage compensations.]
[Plagiarist 1, Sam Altman:] Of everything I worry about, humans are going to do cool [$#!+] and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to achieve status in whatever way. That's not going anywhere I don't think. [What planet does he live on? The last person with a similar character and a young company, who was infringing our rights and properties, and speaking such nonsense, received a prison sentence of 25 years yesterday.]
[Plagiarist 2, Moderator:] But the reward might not be monetary financially. It might be fame and celebration of other cool
[Plagiarist 1, Sam Altman:] Maybe financial in some other way. Again, I don't think we've seen the last evolution of how the economic system's going to work.
[Plagiarist 2, Moderator:] Yeah, but artists and creators are worried. When they see Sora, they're like, "Holy [$#!+]."
[Plagiarist 1, Sam Altman:] Sure. Artists were also super worried when photography came out and then photography became a new art form and people made a lot of money taking pictures. I think things like that will keep happening. People will use the new tools in new ways. [Sign, pay, and comply. But before, pay a lot of damage compensations. This is also the reason why we have a collecting Society for Ontological Performance and Reproduction (SOPR).]
[Plagiarist 2, Moderator:] If we just look on YouTube or something like this, how much of that will be using Sora-like AI-generated content, do you think, in the next five years?
[Plagiarist 1, Sam Altman:] [...] The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do on over one time horizon. So if you think of all of the five-second tasks in the economy, five minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? I think that's a way more interesting, impactful, important question than how many jobs AI can do because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point that's not just a quantitative change, but it's a qualitative one too about the kinds of problems you can keep in your head. I think that for videos on YouTube it'll be the same. Many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it. Sort of directing and running it. [Sign, pay, and comply. But before, pay a lot of damage compensations.]
[...]
[Plagiarist 2, Moderator:] Yeah. And maybe it'll just be tooling in the Adobe suite type of way where it can just make videos much easier and all that kind of stuff. [...] [A lot of talk to finally arrive at our OntoBlender.]
[...]
[Plagiarist 2, Moderator:] So allow me to ask, what's been the most impressive capabilities of GPT-4 to you and GPT-4 Turbo?
[Plagiarist 1, Sam Altman:] I think it kind of sucks.
[...]
[Plagiarist 2, Moderator:] What are the best things it can do and the limits of those best things that allow you to say it sucks, therefore gives you an inspiration and hope for the future?
[Plagiarist 1, Sam Altman:] One thing I've been using it for more recently is sort of like a brainstorming partner.
[...]
[Plagiarist 1, Sam Altman:] [...] there's something about the kind of creative brainstorming partner [...]
One of the other things that you can see a very small glimpse of is when I can help on longer horizon tasks, break down something in multiple steps, maybe execute some of those steps, search the internet, write code, whatever, put that together. When that works, which is not very often, it's very magical. [For this creative capability and other original and unique, unforeseeable and unexpected, personal, revolutionary and iconic properties and functionalities of the self-reflection, self-image, or self-portrait, and cybernetic reflection, augmentation, and extension of C.S., started and created with our Evoos and our OS, we call it generative and creative Bionics and rightfully claim the copyright.]
[...]
[Plagiarist 1, Sam Altman:] Iterative back and forth to human, it can get more often when it can go do a 10 step problem on its own.
[...]
It doesn't work for that too often, sometimes.
[Plagiarist 2, Moderator:] Add multiple layers of abstraction or do you mean just sequential? [Now, they are talking about Cognitive Agent System (CAS) architecture, which is also included in our Evoos and our OS.]
[Plagiarist 1, Sam Altman:] Both, to break it down and then do things that different layers of abstraction to put them together. Look, I don't want to downplay the accomplishment of GPT-4, but I don't want to overstate it either. [...] [As true experts and we always explained, that is not Artificial Intelligence (AI), because they already did that in the 1990s, and now come the really hard lessons and problems.]
[Plagiarist 2, Moderator:] That said, I mean ChatGPT was a transition to where people started to believe there is an uptick of believing, not internally at OpenAI. [Now, we are talking about a belief system.]
[...]
[Plagiarist 2, Moderator:] Perhaps there's believers here, but when you think of
[Plagiarist 1, Sam Altman:] And in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface. And by the interface and product, I also mean the post training of the model [by Reinforcement Learning from Human Feedback (RLHF)] and how we tune it to be helpful to you and how to use it than the underlying model itself. [Feedback is related to the field of Cybernetics and human feedback is related to our Evoos and our OS. Bingo!!!]
[...]
[Plagiarist 2, Moderator:] How does the context window of going from 8K to 128K tokens compare from GPT-4 to GPT-4 Turbo?
[Plagiarist 1, Sam Altman:] Most people don't need all the way to 128 most of the time. Although if we dream into the way distant future, we'll have context length of several billion. You will feed in all of your information, all of your history over time and it'll just get to know you better and better and that'll be great. For now, the way people use these models, they're not doing that. People sometimes post in a paper or a significant fraction of a code repository, whatever, but most usage of the models is not using the long context most of the time. [Let us comment this in the following way: The party is over.]
[...]
[Plagiarist 2, Moderator:] [...] What are some interesting use cases of GPT-4 that you've seen?
[Plagiarist 1, Sam Altman:] The thing that I find most interesting is not any particular use case that we can talk about those, [...] but people who use it as their default start for any kind of knowledge work task. [...] The most interesting thing to me is the people who just use it as the start of their workflow. [Here we are once again at our OntoBlender.]
[Plagiarist 2, Moderator:] [...]
You mentioned this collaboration. I'm not sure where the magic is, if it's in here or if it's in there or if it's somewhere in between. I'm not sure. But one of the things that concerns me for knowledge task when I start with GPT is I'll usually have to do fact checking after, like check that it didn't come up with fake stuff. How do you figure that out that GPT can come up with fake stuff that sounds really convincing? So how do you ground it in truth? [What a surprise, we are talking about our OS on the one hand and the truly hard problems on the other hand once again.]
[Plagiarist 1, Sam Altman:] That's obviously an area of intense interest for us. I think it's going to get a lot better with upcoming versions, but we'll have to continue to work on it and we're not going to have it all solved this year. [For sure, it is of high interest, because strong AI based on logic, knowledge, reasoning, and so on is another essential part of our OS, which is stolen next.]
[Plagiarist 2, Moderator:] Well the scary thing is, as it gets better, you'll start not doing the fact checking more and more, right? [And down the drain that AI crap goes. Some companies already exit the party, because they had to spend too much time with fact checking. Have we already said that we are open for business?]
[...]
[Plagiarist 1, Sam Altman:] And people seem to really understand that GPT, any of these models hallucinate some of the time. And if it's mission-critical, you got to check it.
[Plagiarist 2, Moderator:] Except journalists don't seem to understand that. I've seen journalists halfassedly just using GPT-4. [Good luck with democracy.]
[...]
[Plagiarist 2, Moderator:] [...] You've given ChatGPT the ability to have memories. You've been playing with that about previous conversations. And also the ability to turn off memory. [...] What have you seen through that, like playing around with that idea of remembering conversations and not ... [Hopefully, it is needless to say that this was also stolen from our OS.]
[Plagiarist 1, Sam Altman:] We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there's a lot of other things to do, but that's where we'd like to head. You'd like to use a model, and over the course of your life or use a System, it'd be many models, and over the course of your life it gets better and better. [Bingo!!! Obviously, we are talking here about self-reflection, self-image, or self-portrait, and cybernetic reflection, augmentation, and extension, and also features of the OntoLogger component of our OS.]
[Plagiarist 1, Sam Altman:] It's not just that I want it to remember that. I want it to integrate the lessons of that and remind me in the future what to do differently or what to watch out for. We all gain from experience over the course of our lives in varying degrees, and I'd like my AI agent to gain with that experience too. So if we go back and let ourselves imagine that trillions and trillions of context length, if I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails input out, all of my input output in the context window every time I ask a question, that'd be pretty cool I think. [See the comment to the quote before. And we add that this is the next evidence for a copyright infringement.]
[Plagiarist 2, Moderator:] Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it, the more effective the AI becomes that really integrating all the experiences and all the data that happened to you and give you advice?
[Plagiarist 1, Sam Altman:] I think the right answer there is just user choice. Anything I want stricken from the record from my AI agent, I want to be able to take out. If I don't want to remember anything, I want that too. You and I may have different opinions about where on that privacy/utility trade-off for our own AI, or for OpenAI, it is going to be, which is totally fine. But I think the answer is just really easy user choice.
[...]
[Plagiarist 2, Moderator:] [...] There's just some questions I would love to ask, your intuition about what's GPT able to do and not. So it's allocating approximately the same amount of compute for each token it generates. Is there room there in this kind of approach to slower thinking, sequential thinking?
[Plagiarist 1, Sam Altman:] I think there will be a new paradigm for that kind of thinking.
[Plagiarist 2, Moderator:] Will it be similar architecturally as what we're seeing now with LLMs? Is it a layer on top of LLMs? [We only repeat Cognitive Agent System (CAS) architecture and metalevel architecture.]
[Plagiarist 1, Sam Altman:] I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn't have to get... I guess spiritually you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important.
[Plagiarist 2, Moderator:] Is that like a human thought that we just have and you should be able to think hard? Is that wrong intuition?
[...]
[Plagiarist 1, Sam Altman:] It seems to me like you want to be able to allocate more compute to harder problems. It seems to me that if you ask a system like that, "Prove Fermat's Last Theorem," versus, "What's today's date?," unless it already knew and had memorized the answer to the proof, assuming it's got to go figure that out, seems like that will take more compute.
[Plagiarist 2, Moderator:] But can it look like basically an LLM talking to itself, that kind of thing? [Hyperbingo!!! It is called mind, reflection, gedankenspiel==mind game, simulation in mind, etc. and a capability of our Ontologic roBot (OntoBot) based on Natural Language Processing (NLP), Natural Image Processing (NIP), etc., as we already explained several years ago.]
[Plagiarist 1, Sam Altman:] Maybe. I mean, there's a lot of things that you could imagine working. What the right or the best way to do that will be, we don't know. [For sure, they do not know this, but they know where to steal it.]
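For illustration only, the discussed "slower thinking" and "an LLM talking to itself" can be sketched as a simple self-refinement loop, in which a harder question gets more rounds and therefore more compute. The following Python fragment is hypothetical; call_model() and estimate_difficulty() are placeholders and not any real Application Programming Interface (API):

    # Illustrative toy loop: the model criticizes and revises its own draft,
    # and harder questions get more refinement rounds (more compute).

    def call_model(prompt):
        # Placeholder for a language model call.
        return f"draft answer for: {prompt[:40]}"

    def estimate_difficulty(question):
        # Crude proxy: proof-like or explanatory questions get more rounds.
        hard_words = {"prove", "theorem", "derive", "why"}
        return 1 + 2 * len(hard_words & set(question.lower().split()))

    def answer_with_reflection(question):
        answer = call_model(question)
        for _ in range(estimate_difficulty(question)):
            critique = call_model(f"Criticize this answer: {answer}")
            answer = call_model(f"Improve the answer '{answer}' given: {critique}")
        return answer

    print(answer_with_reflection("What's today's date?"))        # few rounds
    print(answer_with_reflection("Prove Fermat's Last Theorem.")) # more rounds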
[Plagiarist 2, Moderator:] This does make me think of the mysterious lore behind Q*. What's this mysterious Q* project? [...] [All indicators and suggestions in relation to Q-star point once again to essential parts of our OS. What else?]
[...]
[Plagiarist 1, Sam Altman:] I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet. We're very interested in it.
[...]
[Plagiarist 1, Sam Altman:] So part of the reason that we deploy the way we do, we call it iterative deployment, rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is I think AI and surprise don't go together. And also the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy, and we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we're under the gun and have to make a rush decision. [...] [We only recall that persons commit suicide and the best years of a lot of girls have been destroyed by deepfakes. And that is only the beginning of a giant mess.]
[...]
[Plagiarist 2, Moderator:] [...] What are some of the biggest challenges and bottlenecks to overcome for whatever it ends up being called, but let's call it GPT-5? Just interesting to ask. Is it on the compute side? Is it on the technical side?
[Plagiarist 1, Sam Altman:] It's always all of these. You know, what's the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It's all of these things together. The thing that OpenAI, I think, does really well... This is actually an original Ilya quote that I'm going to butcher, but it's something like, "We multiply 200 medium-sized things together into one giant thing."
[...]
[Plagiarist 1, Sam Altman:] Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute. Compute, I think, is going to be an unusual market. [...]
But compute is different. Intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about is, at price X, the world will use this much compute, and at price Y, the world will use this much compute. [...]
So I think the world is going to want a tremendous amount of compute. And there's a lot of parts of that that are hard. Energy is the hardest part, [...]. [We only repeat that one can never have enough of 2 things: energy and bandwidth.]
[...]
[Plagiarist 2, Moderator:] How do you decrease the theatrical nature of it? I'm already starting to hear rumblings, because I do talk to people on both sides of the political spectrum, hear rumblings where it's going to be politicized. AI is going to be politicized, which really worries me, because then it's like maybe the right is against AI and the left is for AI because it's going to help the people, or whatever the narrative and the formulation is, that really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that?
[Plagiarist 1, Sam Altman:] I think it will get caught up in left versus right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence, unfortunately. [...]
[Plagiarist 2, Moderator:] Well, that's why truth matters, and hopefully AI can help us see the truth of things, to have balance, to understand what are the actual risks, what are the actual dangers of things in the world. What are the pros and cons of the competition in the space and competing with Google, Meta, xAI, and others?
[Plagiarist 1, Sam Altman:] I think I have a pretty straightforward answer to this that maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper, and all the reasons competition is good. And the con is that I think if we're not careful, it could lead to an increase in sort of an arms race that I'm nervous about.
[Plagiarist 2, Moderator:] Do you feel the pressure of that arms race, like in some negative way?
[Plagiarist 1, Sam Altman:] Definitely in some ways, for sure. We spend a lot of time talking about the need to prioritize safety. [...]
[Plagiarist 2, Moderator:] Part of the problem I have with this kind of slight beef with Elon is that there are silos created as opposed to collaboration on the safety aspect of all of this. It tends to go into silos and closed. Open-source, perhaps, the model. [For sure, we are open for business and our Evoos and our OS are open to a certain extent as essential facilities in the limits of the laws being effective.]
[Plagiarist 1, Sam Altman:] [Plagiarist 3] says, at least, that he cares a great deal about AI safety and is really worried about it, and I assume that he's not going to race unsafely. [He has no clue about AI, but is only repeating what he has heard or seen somewhere and what is, in his opinion, politically correct.]
[Plagiarist 2, Moderator:] Yeah. But collaboration here, I think, is really beneficial for everybody on that front. [We are open for business. Sign, pay, and comply, but before that pay a lot of damage compensations.]
[Plagiarist 1, Sam Altman:] Not really the thing he's most known for.
[...]
[Plagiarist 2, Moderator:] Let me ask you, Google, with the help of search, has been dominating the past 20 years. Think it's fair to say, in terms of the world's access to information, how we interact and so on, and one of the nerve-wracking things for Google, but for the entirety of people in the space, is thinking about, how are people going to access information? Like you said, people show up to GPT as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is how do we get
[Plagiarist 1, Sam Altman:] I find that boring. I mean, if the question is if we can build a better search engine than Google or whatever, then sure, we should go, people should use the better product, but I think that would so understate what this can be. Google shows you 10 blue links, well, 13 ads and then 10 blue links, and that's one way to find information. But the thing that's exciting to me is not that we can go build a better copy of Google search, but that maybe there's just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we'll make it be like that for a lot more use cases.
But I don't think it's that interesting to say, "How do we go do a better job of giving you 10 ranked webpages to look at than what Google does?" Maybe it's really interesting to go say, "How do we help you get the answer or the information you need?
How do we help create that in some cases, synthesize that in others, or point you to it in yet others?" But a lot of people have tried to just make a better search engine than Google and it is a hard technical problem, it is a hard branding problem, it is a hard ecosystem problem. I don't think the world needs another copy of Google.
[Plagiarist 2, Moderator:] And integrating a chat client, like a ChatGPT, with a search engine [We only recall that our Ontologic roBot (OntoBot) and our Ontologic Search (OntoSearch) and Ontologic Find (OntoFind) are already here.]
[Plagiarist 1, Sam Altman:] That's cooler.
[Plagiarist 2, Moderator:] It's cool, but it's tricky. Like if you just do it simply, it's awkward, because if you just shove it in there, it can be awkward.
[Plagiarist 1, Sam Altman:] As you might guess, we are interested in how to do that well. That would be an example of a cool thing.
[Plagiarist 2, Moderator:] Like a heterogeneous integration
[Plagiarist 1, Sam Altman:] The intersection of LLMs plus search, I don't think anyone has cracked the code on yet. I would love to go do that. I think that would be cool. [And just another lie about our existence. See the related comment to the quote before.]
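For illustration only, the discussed integration of a chat client with a search engine can be sketched as a simple retrieve-then-synthesize step. The following Python fragment is hypothetical; search() and call_model() are placeholders and not any real API:

    # Illustrative toy pipeline: fetch search results, put them into the prompt,
    # and let the model synthesize an answer from the cited sources.

    def search(query, top_k=3):
        # Stand-in for a web or index search returning (title, snippet) pairs.
        return [("Result %d" % i, "snippet about " + query) for i in range(top_k)]

    def call_model(prompt):
        # Placeholder for a language model call.
        return "synthesized answer based on: " + prompt[:60]

    def answer_with_search(question):
        hits = search(question)
        sources = "\n".join(f"- {title}: {snippet}" for title, snippet in hits)
        prompt = (f"Question: {question}\n"
                  f"Sources:\n{sources}\n"
                  "Answer the question using only the sources above and cite them.")
        return call_model(prompt)

    print(answer_with_search("Who invented the transistor?"))

The design point is only that the search step narrows the material and the model step synthesizes it, instead of returning 10 ranked webpages.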
[...]
[Plagiarist 1, Sam Altman:] Well, we have to figure out how to grow, but looks like we're going to figure that out. If the question is, do I think we can have a great business that pays for our compute needs without ads, then I think the answer is yes. [We do not think that OpenAI will be able to compete, specifically not on the basis of illegal plagiarisms and fakes of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S..]
[...]
[Plagiarist 1, Sam Altman:] I mean, we work super hard not to do things like that. We've made our own mistakes, we'll make others. I assume Google will learn from this one, still make others. These are not easy problems. One thing that we've been thinking about more and more, I think this is a great idea somebody here had, it would be nice to write out what the desired behavior of a model is, make that public, take input on it, say, "Here's how this model's supposed to behave," and explain the edge cases too. And then when a model is not behaving in a way that you want, it's at least clear about whether that's a bug the company should fix or behaving as intended and you should debate the policy. And right now, it can sometimes be caught in between. Like Black Nazis, obviously ridiculous, but there are a lot of other kinds of subtle things that you could make a judgment call on either way. [He just has no clue about what he is talking about, because he lacks at least 20 years of experience gained in the 1980s and 1990s in the fields of Bionics, specifically the fields of Artificial Intelligence (AI), Machine Learning (ML), and Artificial Neural Network (ANN), and even that is not enough, as we have seen with some very well known senior plagiarists and liars. We also note the integration of the field called Artificial Intelligence 3 (AI3) by us for better understanding, specifically the integration of subsymbolic and symbolic processing, including case-based reasoning, included in our Evoos and our OS, and also integrated with our coherent OM, including MMANN, what is wrongly called FM, and LLM, our generative and creative Bionics, and also 3D and XR spaces, environments, worlds, and universes by our Evoos and our OS. It also looks like the simulation of an ordinary technological progress, which we already went through around 2 decades ago on the one hand, and which we have mentioned several times in the past as one of the dirty tricks on the other hand. But a trick only works one time, if at all.]
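For illustration only, the quoted idea of a published behavior specification and the mentioned integration of subsymbolic and symbolic processing can be sketched as a review step, in which explicit, human-readable rules are checked alongside a learned score. The following Python fragment is hypothetical; the rules and the risk function are placeholders and not the specification or implementation of any system discussed here:

    # Illustrative toy review step: explicit symbolic rules plus a stand-in
    # subsymbolic score decide whether an output is "intended behavior"
    # (debate the policy) or "possibly a bug" (needs human review).

    POLICY = {
        "no_personal_data": lambda text: "ssn:" not in text.lower(),
        "no_medical_advice": lambda text: "diagnosis" not in text.lower(),
    }

    def subsymbolic_risk(text):
        # Stand-in for a learned classifier returning a risk score in [0, 1].
        return min(1.0, len(text) / 1000.0)

    def review(text):
        violated = [name for name, rule in POLICY.items() if not rule(text)]
        if violated:
            return "policy violation: " + ", ".join(violated)  # intended behavior, debatable policy
        if subsymbolic_risk(text) > 0.8:
            return "flagged by learned filter"                  # possible bug, needs human review
        return "allowed"

    print(review("Here is a harmless answer."))
    print(review("Your diagnosis is ..."))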
[...]
[Plagiarist 1, Sam Altman:] Yeah, I'm open to a lot of ways a model could behave, then, but I think you should have to say, "Here's the principle and here's what it should say in that case." [He just does not get it. See also the comment to the quote before. Howsoever, we are open for business.]
[...]
[Plagiarist 2, Moderator:] So what, in general, is the process for the bigger question of safety? How do you provide that layer that protects the model from doing crazy, dangerous things?
[Plagiarist 1, Sam Altman:] I think there will come a point where that's mostly what we think about, the whole company. And it's not like you have one safety team. It's like when we shipped GPT-4, that took the whole company thinking about all these different aspects and how they fit together. And I think it's going to take that. More and more of the company thinks about those issues all the time. [Oh, what a [net zero cursing]. He is doing something that has safety and security issues if done wrong, as OpenAI and Co. are already doing, and that might get out of control, and only then will they do the truly hard work and think about it.]
[Plagiarist 2, Moderator:] That's literally what humans will be thinking about, the more powerful AI becomes. So most of the employees at OpenAI will be thinking, "Safety," or at least to some degree.
[Plagiarist 1, Sam Altman:] Broadly defined. Yes. [Oh, what a [net zero cursing]. We are talking about a very large, complex, opaque, powerful, morally and culturally unethical, and from time to time fantasizing and hallucinating bionic system, which in case of ML and ANN is based on probability theory (also called probabilistics) and statistics, specifically Bayesian Statistics (BS), and part of what is called emergence-based AI 2 by us for better understanding, and he still claims to be able to control it, while simultaneously warning and panicking the whole world about the dangers. We already said in relation to social media platforms that they need a system, which is at least 3 times larger (see also the note Meta (Facebook) totally incompetent of the 6th of February 2024). When a fire is out of control, then one can only run away, and those fire starters are the first ones who will run away, even without warning the others.]
[...]
[Plagiarist 2, Moderator:] [...] Sorry to linger on this, even though you can't quite say details yet, but what aspects of the leap from GPT-4 to GPT-5 are you excited about?
[Plagiarist 1, Sam Altman:] I'm excited about being smarter. And I know that sounds like a glib answer, but I think the really special thing happening is that it's not like it gets better in this one area and worse at others. It's getting better across the board. That's, I think, super-cool. [Yeah, it is so super-cool to have a system, which becomes smarter and better across the board, including in fantasizing and hallucinating, and also lying, mind manipulation, and psychological terrorization, as well as hacking, and so on. Why not connect those chatbots to the red button for the real bomb? That would also be super-cool. Isn't it? We are very sure that our fans and readers can see the next giant flaw in that way of thinking.]
[Plagiarist 2, Moderator:] Yeah, there's this magical moment. I mean, you meet certain people, you hang out with people, and you talk to them. You can't quite put a finger on it, but they get you. It's not intelligence, really. It's something else. And that's probably how I would characterize the progress of GPT. It's not like, yeah, you can point out, "Look, you didn't get this or that," but it's just the degree to which there's this intellectual connection. You feel like there's an understanding of your crappily formulated prompts, that it grasps the deeper question behind the question that you were asking. Yeah, I'm also excited by that. I mean, all of us love being heard and understood. [This is just another manifestation of mentalism and Neuro-Linguistic Programming (NLP or NeuroLP) of the one-trick pony, as explained in the Clarification of the 3rd of March 2024. Such a system does not connect with a user, but only simulates and reflects the intelligence, connection, emotion, etc., which is included in the training data or the real-time data streams. See also the section Annotations of the webpage Roboverse of the website of OntoLinux.]
[...]
[Plagiarist 2, Moderator:] That's a weird feeling. Even with programming, when you're programming and you say something, or just the completion that GPT might do, it's just such a good feeling when it got you, what you're thinking about. And I look forward to it getting you even better. On the programming front, looking out into the future, how much programming do you think humans will be doing 5, 10 years from now?
[Plagiarist 1, Sam Altman:] I mean, a lot, but I think it'll be in a very different shape. Maybe some people will program entirely in natural language. [And here we have our Ontologic Programming (OP) paradigm, which integrates literate programming and is integrated in our Ontologic roBot (OntoBot), and has been described in the message OntoLix and OntoLinux Further steps of the 10th of March 2013. And now all together: HyperBingo!!!]
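For illustration only, programming entirely in natural language can be sketched as a request that is first turned into source code by a model and then executed. The following Python fragment is hypothetical; generate_code() is a toy placeholder and not any real code-generating service:

    # Illustrative toy example: a plain-English request is mapped to code
    # by a stand-in generator, and the generated code is then executed.

    def generate_code(request):
        # A real system would call a code-generating model here.
        request = request.lower()
        if "sort" in request and "descending" in request:
            return "def task(data):\n    return sorted(data, reverse=True)"
        return "def task(data):\n    return data"

    request = "Sort the list of numbers in descending order."
    namespace = {}
    exec(generate_code(request), namespace)   # turn the generated text into a callable function
    print(namespace["task"]([3, 1, 2]))       # prints [3, 2, 1]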
[...]
[Plagiarist 2, Moderator:] [...] But that changes the nature of the skillset, or the predisposition, for the kind of people we call programmers then.
[Plagiarist 1, Sam Altman:] Changes the skillset. How much it changes the predisposition, I'm not sure. [At first, it changes the skillset, then later it also changes the predisposition.]
[...]
[Plagiarist 2, Moderator:] Will we see humanoid robots or humanoid robot brains from OpenAI at some point?
[Plagiarist 1, Sam Altman:] At some point. [We have already documented the next copyright infringement due to the integration of a plagiarism and fake with a humanoid robot, which even learns by watching videos (see also for example our AutoBrain, specifically the section Learning by Doing of the referenced webpage).]
[...]
[Plagiarist 1, Sam Altman:] We're a small company. We have to really focus. And also, robots were hard for the wrong reason at the time, but we will return to robots in some way at some point.
[...]
[Plagiarist 2, Moderator:] So to you, you're looking for some really major transition in how the world
[Plagiarist 1, Sam Altman:] For me, that's part of what AGI implies.
[Plagiarist 2, Moderator:] Singularity-level transition? [Here we have the term singularity, which we also used in relation to our OS. See the webpage Introduction of the website of OntoLinux.]
[Plagiarist 1, Sam Altman:] No, definitely not.
[Plagiarist 2, Moderator:] But just a major transition, like the internet being, like Google search did, I guess. What was the transition point, do you think, now? [Why was Google search a major transition? The already existing search engine AltaVista was quite fine and only needed to be scaled up, which required a profitable business model. Howsoever, we have the consensus of a large group of entities, if not the whole world, in relation to another major transition besides the other major transition based on another essential part of our OS, which is wrongly called Cloud-native Computing (CnC), which has a causal link to our Evoos and our OS, which again confirms the exclusive rights and properties of C.S. and our corporation.]
[Plagiarist 1, Sam Altman:] Does the global economy feel any different to you now or materially different to you now than it did before we launched GPT-4? I think you would say no. [The excitement for and the appeal of the new is over. The party is over.]
[Plagiarist 2, Moderator:] No, no. It might be just a really nice tool for a lot of people to use. Will help you with a lot of stuff, but doesn't feel different. And you're saying that
[...]
[Plagiarist 2, Moderator:] [...] What kind of stuff would you talk about [with the AGI]?
[Plagiarist 1, Sam Altman:] [...] I find it surprisingly difficult to say what I would ask that I would expect that first AGI to be able to answer. That first one is not going to be the one which is like, I don't think, "Go explain to me the grand unified theory of physics, the theory of everything for physics." I'd love to ask that question. I'd love to know the answer to that question. [So now we are at our Caliber/Calibre of our OS, which "also constitutes the best Universal Theory of Everything, because it is the first and still only active theory" (see the Clarification of the 8th of October 2009).]
[Plagiarist 2, Moderator:] You can ask yes or no questions about "Does such a theory exist? Can it exist?" [And here we have ontology proper.]
[Plagiarist 1, Sam Altman:] "Well, then, those are the first questions I would ask.
[Plagiarist 2, Moderator:] Yes or no. And then based on that, "Are there other alien civilizations out there? Yes or no? What's your intuition?" And then you just ask that.
[Plagiarist 1, Sam Altman:] Yeah, I mean, well, so I don't expect that this first AGI could answer any of those questions even as yes or nos. But if it could, those would be very high on my list.
[...]
[Plagiarist 1, Sam Altman:] [...] Maybe we need to go invent more technology and measure more things first.
[...]
[Plagiarist 1, Sam Altman:] I mean, maybe it says, "You want to know the answer to this question about physics, I need you to build this machine and make these five measurements, and tell me that."
[Plagiarist 2, Moderator:] And on the mathematical side, maybe prove some things. Are you interested in that side of things, too? The formalized exploration of ideas?
[Plagiarist 1, Sam Altman:] Mm-hmm.
[Plagiarist 2, Moderator:] Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? [The question is not who builds AGI first, but who owns it. And it is obvious, as one can see with all those plagiarisms and fakes, that AGI is (based on) our Evoos and our OS. And C.S. is C.S., and the owner of C.S. and all related self-reflections, self-images, or self-portraits, and cybernetic reflection, augmentation, and extension, and therefore C.S. holds the moral rights for our Evoos and our OS, and also complies with the laws. Period.]
[Plagiarist 1, Sam Altman:] Look, I'll just be very honest with this answer. I was going to say, and I still believe this, that it is important that neither I nor any other one person have total control over OpenAI or over AGI. And I think you want a robust governance system. [...] [We hold the rights and properties for the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. and have our collecting Society for Ontological Performance and Reproduction (SOPR). Period.]
[...] I continue to not want super-voting control over OpenAI. I never have. Never have had it, never wanted it. Even after all this craziness, I still don't want it. I continue to think that no company should be making these decisions, and that we really need governments to put rules of the road in place. [He only wants this because he is unable to steal our original and unique works of art otherwise. That is all behind all that blah blah blah. And governments have to comply with the constitution or the basic right or both, which grants C.S. exclusive freedoms, rights, and properties. Period.]
And I realize that that means people like [a fraudulent venture capitalist] or whatever will claim I'm going for regulatory capture, and I'm just willing to be misunderstood there. It's not true. And I think in the fullness of time, it'll get proven out why this is important. But I think I have made plenty of bad decisions for OpenAI along the way, and a lot of good ones, and I'm proud of the track record overall. But I don't think any one person should, and I don't think any one person will. I think it's just too big of a thing now, and it's happening throughout society in a good and healthy way. But I don't think any one person should be in control of an AGI, or this whole movement towards AGI. And I don't think that's what's happening. [Once again, C.S. is C.S., and is the owner of C.S. and all related self-reflections, self-images, or self-portraits, and cybernetic reflection, augmentation, and extension, and therefore C.S. holds the moral rights for our Evoos and our OS, and also complies with the laws, and therefore our collecting SOPR demands the payment of damage compensations and the transfer of all illegal materials. Period.]
[...]
[Plagiarist 2, Moderator:] Are you afraid of losing control of the AGI itself? That's a lot of people who are worried about existential risk, not because of state actors, not because of security concerns, but because of the AI itself.
[Plagiarist 1, Sam Altman:] That is not my top worry as I currently see things. There have been times I worried about that more. There may be times again in the future where that's my top worry. It's not my top worry right now.
[...]
[Plagiarist 2, Moderator:] [...] Given Sora's ability to generate simulated worlds, let me ask you a pothead question. Does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system? [So here we are at the fields of Simulated Reality (SR or SimR) and Synthetic Reality (SR or SynR), which both are part of our OS, and the so-called simulation hypothesis or simulation argument in the field of Philosophy. We also have the aspect of our OS, when viewed as a belief system.]
[Plagiarist 1, Sam Altman:] Somewhat. I don't think that's the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone's probability somewhat, or at least openness to it somewhat. But I was certain we would be able to do something like Sora at some point. It happened faster than I thought, but I guess that was not a big update.
[Plagiarist 2, Moderator:] Yeah. But the fact that ... And presumably, it'll get better and better and better ... You can generate worlds that are novel, they're based in some aspect of training data, but when you look at them, they're novel, that makes you think how easy it is to do this thing. How easy it is to create universes, entire video game worlds that seem ultra-realistic and photo-realistic. And then how easy is it to get lost in that world, first with a VR headset, and then on the physics-based level? [And here we have (information) spaces, environments, worlds, and universes respectively realities, which are fusioned to our New Reality (NR), which again is manifested by our Ontoverse (Ov), and integrated with our OntoBot, generative and creative Bionics, and so on. This shows once again why Sora is a copyright infringement, because it is our coherent Ontologic Model (OM), including FMs and LLMs, with our eXtended Mixed Reality (XMR) or simply eXtended Reality (XR), and so on. See also the Clarification of the 16th of April 2016, the Clarification #2 of the 20th of May 2017, and the Clarification of the 8th of November 2019, and also the Investigations::Multimedia of the 22nd of December 2021.]
[Plagiarist 1, Sam Altman:] Someone said to me recently, I thought it was a super-profound insight, that there are these very simple-sounding but very psychedelic insights that exist sometimes. So the square root function, square root of four, no problem. Square root of two, okay, now I have to think about this new kind of number. But once I come up with this easy idea of a square root function that you can explain to a child and that exists by even looking at some simple geometry, then you can ask the question of "What is the square root of negative one?" And this is why it's a psychedelic thing. That tips you into some whole other kind of reality. [See Jean Baudrillard "Simulacra and Simulation" and Hyperreality and its origins, and Umberto Eco "Travels In Hyperreality". According to an online encyclopedia, "Baudrillard and Eco explained that it is 'the unlimited existence of "hyperreal" numbers or "non-standard reals", infinite and infinitesimal, that cluster about assumedly fixed or real numbers and factor through transference differentials'."]
And you can come up with lots of other examples, but I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways. And I think there are a lot of those operators for why people may think that any version that they like of the simulation hypothesis is maybe more likely than they thought before. But for me, the fact that Sora worked is not in the top five. [We do not believe that statement.]
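As a small side note, the quoted progression can be written out in standard mathematical notation (this is textbook mathematics and not taken from the interview): $\sqrt{4} = 2 \in \mathbb{N}$ is unproblematic, $\sqrt{2} = 1.41421\ldots \notin \mathbb{Q}$ already forces the irrational and thereby the real numbers, and $\sqrt{-1} = i \notin \mathbb{R}$ forces the complex numbers, so the same lowly operator pushes through $\mathbb{N} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}$. The mathematical hyperreal numbers ${}^{*}\mathbb{R}$ of non-standard analysis, mentioned in the comment above, extend $\mathbb{R}$ in yet another direction by adding infinite and infinitesimal quantities, and should not be confused with Baudrillard's notion of hyperreality.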
[Plagiarist 2, Moderator:] I do think, broadly speaking, AI will serve as those kinds of gateways at its best, simple, psychedelic-like gateways to another way to see reality. [See the section Paving the Way by Bridging the Gap of the webpage of our Caliber/Calibre.]
[Plagiarist 1, Sam Altman:] That seems for certain. [And it does not seem for certain, but is just another fact that we have here our OS with our NR and Ov (see for example the Clarification of the 16th of April 2016 and the comments to the preceding quotes once again).]
[...]
[Plagiarist 1, Sam Altman:] One thing that I wonder about, is AGI going to be more like some single brain, or is it more like the scaffolding in society between all of us? [...] [...W]hat you have is this scaffolding that we all contributed to and built on top of. No one person is going to go build the [Ontoscope (Os)]. No one person is going to go discover all of science, and yet you get to use it. And that gives you incredible ability. And so in some sense, we all created that, and that fills me with hope for the future. That was a very collective thing. [We are sure our fans and readers only waited for this topic of the Global Brain (GB), which is also included in our OS. But here we have something.]
[...]
[Plagiarist 2, Moderator:] [...] And now let me leave you with some words from Arthur C. Clarke. "It may be that our role on this planet is not to worship God, but to create him." [...] [As we always emphasize, we do not like to talk about our Ontologic System in this way, though we created the term Pocket God, and discussed our belief system, spirit, aether, and environment, ghost in the machine, or ghost in the shell, etc. (see for example the Clarification of the 2nd of June 2012 and the Clarification of the 16th of April 2016).]
Comment
The big girls and boys have already taken over OpenAI, respectively we have already taken back our rights and properties, and the mission of OpenAI is finally over.
Who tells it to Sam Altman? Ooops.