23:53 UTC+2
OneAPI and UXL blacklisted
Unified Acceleration Foundation (UXL)
The same fraudulent and even serious criminal actors, goals, strategies, plans, tricks, activities, and so on as in the cases of what is wrongly and illegally called smartphone and Artificial Intelligence (AI) phone, Cloud, Edge, and Fog Computing (CEFC), Cloud-native Computing (CnC), crypto, Large Language Model (LLM), Foundation Model (FM), chatbot, Conversational Artificial Intelligence (CAI), Retrieval Augmented Generation (RAG), etc., etc., etc., and therefore the same legal issues, prohibitions, regulations, etc., etc., etc..
A trick only works once.
A related Clarification of the 3rd of September 2024 (tomorrow) is already in preparation.
00:09 UTC+2
Support of illegal FOSS and start-ups is dealbreaker
Anyone, who supports illegal Free and Open Source Software (FOSS) and start-ups, does not want a deal.
00:32 and 21:15 UTC+2
Clarification
*** Work in progress - better wording and comments ***
From time to time we come across the field of heterogeneous computing and every time this happens we get the impression that we are reading something related to essential parts of our original and unique Ontologic System (OS) with its Ontologic System Architecture (OSA).
But in contrast to other legal issues, where we do not need to ask ourselves if our OS has been taken as source of inspiration and blueprint and the rights and properties of C.S. and our corporation have been infringed, we could not directly say that this is also the case in relation to unified heterogeneous computing on the basis of dissimilar co-processors.
This clarification shows why our first impression was once again correct and also that the context is much larger.
For a better understanding, we first quote some common sense explanations about the subjects
graphics card,
Artificial Intelligence (AI) accelerator or Neural Processing Unit (NPU),
Cell Broadband Engine (Cell BE),
CUDA (originally Compute Unified Device Architecture),
Open Hybrid Multicore Parallel Programming (OpenHMPP),
Open Computing Language (OpenCL),
C++ Accelerated Massive Parallelism (C++ AMP),
SYCL (originally SYstem-wide Compute Language),
Heterogeneous System Architecture (HSA),
ROCm (originally Radeon Open Compute platform),
Graphics Core Next (GCN),
oneAPI, and
Unified Acceleration Foundation (UXL), and also
Stargate and other supercomputers.
We quote an online encyclopedia about the subject graphics card: "A graphics card (also called a video card, display card, graphics accelerator, graphics adapter, VGA card/VGA, video adapter, display adapter, or colloquially GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor. Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction to an integrated graphics processor on the motherboard or the central processing unit (CPU). A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is sometimes also used to erroneously refer to the graphics card as a whole.[1]
Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load from the CPU.[2] Additionally, computing platforms such as OpenCL and CUDA allow using graphics cards for general-purpose computing. Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation.[3][4][5]
[...]
History
[...]
Within the industry, graphics cards are sometimes called graphics add-in-boards, abbreviated as AIBs,[8] with the word "graphics" usually omitted.
[...]
Specific usage
Some GPUs are designed with specific usage in mind:
1. Gaming
[...]
2. Cloud gaming
[...]
3. Workstation
[...]
4. Cloud Workstation
[...]
5. Artificial Intelligence Cloud
[...]
6. Automated/Driverless car
[...]
[...]
See also
[...]
[General-Purpose computing on Graphics Processing Units (]GPGPU[)] (i.e.: CUDA, AMD FireStream)
[...]"
Comment
We note
Graphics Processing Unit (GPU),
Add-In-Board (AIB), and
General-Purpose Computing on Graphics Processing Units (GPGPU).
Whether we discuss the field of graphics card or the field of GPU does not matter, because a graphics card has a GPU and the difference between both is irrelevant in relation to the architecture and software discussed in this clarification.
But very relevant for legal reasons are the
specific usage scenarios 2. Cloud gaming, 4. Cloud Workstation, 5. Artificial Intelligence Cloud, and 6. Automated/Driverless car, and
technological evolution in this field,
because both areas have a causal link with our OS and its OSA, on which this clarification focuses.
We quote an online encyclopedia about the subject Artificial Intelligence (AI) accelerator or Neural Processing Unit (NPU): "An AI accelerator, deep learning processor or neural processing unit (NPU) is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and computer vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks.[4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. [...]
AI accelerators such as neural processing units (NPUs) are used in mobile devices such as Apple iPhones and Huawei cellphones,[6] and personal computers such as [...] laptops[...] and [workstations].[...] Accelerators are used in cloud computing servers, including [specialized chips in cloud computing platforms].[...] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.
Graphics processing units [...] often include AI-specific hardware, and are commonly used as AI accelerators, both for training and inference.[11]
History
Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.
Early attempts
[...]
In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[16 [Designing a connectionist network supercomputer. [January 1994]]][17]
FPGA-based accelerators were also first explored in the 1990s for both inference and training.[18 [Space Efficient Neural Net Implementation. [February 1995]]][19 [A Generic Building Block for Hopfield Neural Networks with On-Chip Learning. [1996]]]
[...]
Smartphones began incorporating AI accelerators starting with the Qualcomm Snapdragon 820 in 2015.[25][26]
Heterogeneous computing
Heterogeneous computing incorporates many specialized processors in a single system, or a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[27 [Synergistic Processing in Cell's Multicore Architecture. [2006]]] have features significantly overlapping with AI accelerators including: support for packed low precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor has been applied to a number of tasks[28][29][30] including AI.[31 [Development of an artificial neural network on a heterogeneous multicore architecture to predict [a quantity ...]. [March 2008]]][32][33]
Use of GPUs
Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. The mathematical basis of neural networks and image manipulation are similar, embarrassingly parallel tasks involving matrices, leading GPUs to become increasingly used for machine learning tasks.[35 [High Performance Convolutional Neural Networks for Document Processing. [23rd of October 2006 or 9th of November 2006]]][36]
[...]
Use of FPGAs
[...]
Use of NPUs
Since 2017, several CPUs and SoCs have on-die NPUs [...].
Emergence of dedicated AI accelerator ASICs
While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency[46][47] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[48] [...]
Ongoing research
In-memory computing architectures
[...]"
Comment
First of all, we note that the content of the webpage has been written in a way to mislead the public.
We also note the
lack of general support for heterogeneous computing and AI accelerators by an operating system and
the very small number or even lack of truly relevant prior art works.
The use of FPGAs was already common in the 1990s in the field of Evolutionary Computing (EC), including Genetic Programming (GP).
See the OntoLinux Website update of the 1st of August 2007 to find the fields of mobile device and robot.
See the Feature-List #1 22nd of April 2008 of our OS for the list points
Multiprocessing (see Linux)
Parallel operating of graphic cards, [accelerators, adapters, or colloquially GPUs] and other multimedia cards, [accelerators, adapters, or colloquially MPUs] from different manufacturers
User centric migration of applications and data from one computing machine to another, even from a personal computer to a cell phone, or an automotive media center
Cluster functionality.
The second list-point also includes the GPU utilized as AI accelerator or NPU, because our Evoos is based on bionics (e.g. AI, ML, CI, ANN, etc.) and is integrated by our OS with its OSA, and hence it also includes all other Accelerated Processing Units (APUs).
The third list-point is a slightly encrypted description of the field of Cloud Computing of the second generation (CC 2.0).
The feature also covers all kinds of multimedia cards. Our Evoos is about AI, ML, CI, and ANN, and therefore the GPU utilized as NPU has always been included in our OS.
What is wrongly called Cloud, Edge, and Fog Computing (CEFC) is part of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), which collectively are our Ontoverse (Ov) and New Reality (NR).
See also the section Hardware-Technologie of the Innovation-Pipeline of Ontonics for
Neural Net processor,
SuperComputing processor, and
SuperComputer,
and MultiCore Competence (MCC).
See also the comment to the quote about CUDA below.
Furthermore, what is wrongly and illegally called smartphone is our Ontoscope (Os), which is now wrongly and illegally called AI phone.
Android smartphone (e.g. Huawei cellphone), Apple iPhone, etc. are Ontoscope (Os) variants.
We quote an online encyclopedia about the subject Cell Broadband Engine (Cell BE) of the 21st of July 2004 (3rd version): "The Cell is a [Multi-Core Processor (MCP) architecture] being developed by Toshiba, IBM, and Sony. The Cell chip is intended to be scalable from handheld devices to mainframe computers by utilizing parallel processing. Sony plans to use the chip in their PlayStation 3 game console.
The system is said to consist of a number of more or less identical units, which software then allocates to process computing tasks in parallel. There will be several versions of the Cell chip with varying number of processing units depending on the device where the chip is used. The companies designing the chip has claimed that by scaling the number of units in the chip, supercomputer-like performance can be made available in consumer devices."
We quote an online encyclopedia about the subject Cell Broadband Engine (Cell BE) of the 28th of December 2004 (10th version): "The Cell is a [Multi-Core Processor (MCP) architecture] being developed by Toshiba, IBM, and Sony. The Cell chip is intended to be scalable from handheld devices to mainframe computers by utilizing parallel processing. Sony plans to use the chip in their PlayStation 3 game console.
While the Cell chip can have a number of different configurations, the workstation and PlayStation 3 version of Cell consists of one "Processing Element" ("PE"), and eight "Attached Processing Units" ("APU"). The PE, said to be based on the POWER Architecture by IBM, is a kind of "traffic cop" for the underlying APUs. The APUs are the real computational power of the chip. [...] Cell allows for multiple processing units to be put onto one die, and the patent showed four on one die, called the "Broadband Engine" [...]. [...]
[...]
Efforts to create similar multiple-core processors by Sun Microsystems, including MAJC (pronounced "magic"), a very similar effort, have missed their mark. The first MAJC chip was originally designed, similar to IBM's Cell, for multimedia processing, but instead of selling the chip to set-top box and game machine manufacturers, Sun repositioned the MAJC chip as a high-end graphics processor for workstations.
[...]"
Comment
We note
supercomputer in a pocket.
We quote an online encyclopedia about the subject Cell Broadband Engine (Cell BE) of the 31st of October 2006: "Cell is a microprocessor architecture jointly developed by a Sony, Toshiba, and IBM alliance known as STI. The architectural design and first implementation were carried out [...] over a four-year period beginning March 2001 [...].
[...] Cell combines a general-purpose Power Architecture core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.
[...] Exotic features such as the XDR memory subsystem and coherent [Element Interconnect Bus (]EIB[)] interconnect [1] appear to position Cell for future applications in the supercomputing space to exploit the Cell processor's prowess in floating point kernels.
The Cell architecture breaks ground in combining a light-weight general-purpose processor with multiple GPU-like coprocessors into a coordinated whole, a feat which involves a novel memory coherence architecture for which IBM received many patents.
The architecture emphasizes [power] efficiency[...], prioritizes bandwidth over latency, and favors peak computational throughput over simplicity of program code. For these reasons, Cell is widely regarded as a challenging environment for software development. IBM provides a comprehensive Linux-based Cell development platform to assist developers in confronting these challenges. [...]
[...]
[...]
Overview
The Cell Broadband Engine [...] is a microprocessor designed to bridge the gap between conventional desktop processors [...] and more specialised high-performance processors, such as [...] graphics-processors (GPUs). The name belies its intended use, namely as a component in current and future digital distribution systems; as such it may be utilised in high-definition displays and recording equipment, as well as computer entertainment systems [...]. Additionally the processor should be well suited to digital imaging systems (medical, scientific, etc.) as well as physical simulation (e.g. scientific and structural engineering modelling).
In a simple analysis the Cell processor can be split into four components: external input and output structures, the main processor called the Power Processing Element (PPE) [...], eight fully-functional co-processors called the Synergystic Processing Elements or SPEs and a specialised high-bandwidth circular data bus connecting the PPE, input/output elements and the SPEs, called the Element Interconnect Bus or EIB.
To achieve the high performance needed for mathematically intensive tasks [...] the Cell processor simply marries the SPEs and the PPE via the EIB to give both access to main memory or other external data storage. The PPE which is capable of running a conventional operating system has control over the SPEs and can start, stop, interrupt and schedule processes running on the SPEs. To this end the PPE has additional instructions relating to control of the SPEs. Despite having Turing complete architectures the SPEs are not fully autonomous and require the PPE to initiate them before they can do any useful work. Most of the "horsepower" of the system comes from the synergistic processing elements.
The PPE and bus architecture includes various modes of operation giving different levels of protection, allowing areas of memory to be protected from access by specific processes running on the SPEs or PPE.
Influence and contrast
In some ways the Cell system resembles early Seymour Cray designs in reverse. The famed CDC 6600 used a single very fast processor to handle the mathematical calculations, while a series of ten slower systems were given smaller programs to keep the main memory fed with data. In the Cell the problem has been reversed: reading the data is no longer the difficult problem due to the complex encodings used in industry; today the problem is efficiently decoding that data into an ever-less-compressed version as quickly as possible.
Modern graphics cards have multiple elements very similar to the SPE's, known as shader units, with an attached high speed memory. Programs, known as shaders, are loaded onto the units to process the input data streams fed from the previous stages (possibly the CPU), according to the required operations.
The main differences are that the Cell's SPEs are much more general purpose than shader units, and the ability to chain the SPEs under program control offers considerably more flexibility, allowing the Cell to handle graphics, sound, or anything else.
Architecture
[...] Due to the nature of its applications, Cell is optimized towards single precision floating point computation. The SPEs are capable of performing double precision calculations, albeit with an order of magnitude performance penalty. More general purpose computing tasks can be done on the PPE.
Power Processor Element
[...] The PPE is not intended to perform all primary processing for the system, but rather to act as a controller for the other eight SPEs, which handle most of the computational workload. [...]
Synergistic Processing Elements (SPE)
Each SPE is composed of a "Synergistic Processing Unit" ("SPU"), and a "Memory Flow Controller" ("MFC") (DMA, MMU, and bus interface). [8] [...] With the current generation of the Cell, each SPE contains a 256 KiB instruction and data local memory area (called "local store") which is visible to the PPE and can be addressed directly by software. Each SPE can support up to 4 GB of local store memory. The local store does not operate like a conventional CPU cache since it is neither transparent to software nor does it contain hardware structures that predict which data to load. [...] Note that the SPU processor can not directly access system memory; the 64-bit memory addresses formed by the SPU must be passed from the SPU processor to the SPE memory flow controller (MFC) to set up a DMA operation within the system address space.
In one typical usage scenario, the system will load the SPEs with small programs (similar to threads), chaining the SPEs together to handle each step in a complex operation. For instance, a set-top box might load programs for reading a DVD, video and audio decoding, and display, and the data would be passed off from SPE to SPE until finally ending up on the TV. Another possibility is to partition the input data set and have several SPEs performing the same kind of operation in parallel. [...]
[...]
The research also showed that Cell is overall 3 to 12 times faster on every type of high performance computation tasks. Authors concluded, "Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency. We also conclude that Cell's heterogeneous multi-core implementation is inherently better suited to the HPC environment than homogeneous commodity multi-core processors."
[...]
Element Interconnect Bus (EIB)
The EIB is a communication bus internal to the Cell processor which connects the various on-chip system elements: the PPE processor, the memory controller (MIC), the eight SPE coprocessors, and two off-chip I/O interfaces, for a total of 12 participants. The EIB also includes an arbitration unit which functions as a set of traffic lights. In some documents IBM refers to EIB bus participants as 'units'.
The EIB is presently implemented as a circular ring [...]
[...]
Data flows on an EIB channel stepwise around the ring. Since there are twelve participants, the total number of steps around the channel back to the point of origin is twelve. Six steps is the longest distance between any pair of participants. An EIB channel is not permitted to convey data requiring more than six steps; such data must take the shorter route around the circle in the other direction. The number of steps involved in sending the packet has very little impact on transfer latency: the clock speed driving the steps is very fast relative to other considerations. However, longer communication distances are detrimental to the overall performance of the EIB as they reduce available concurrency.
Despite IBM's original desire to implement the EIB as a more powerful cross-bar, the circular configuration they adopted to spare resources rarely represents a limiting factor on the performance of the Cell chip as a whole. In the worst case, the programmer must take extra care to schedule communication patterns where the EIB is able to function at high concurrency levels.
[...]
Memory controller and I/O
[...]
Broadband engine
[...] It is believed that Cell allows for multiple processing cores to be put onto one die [...]. Sony, Toshiba, and IBM have claimed that they intend to scale the processor for various uses, both low-end and high-end, by varying the number of cores on the chip, the number of units in a single core, and by linking multiple chips to each other via network or memory bus.
Possible applications
Blade server
[...]
Console videogames
[...]
Home cinema
[...]
Super computing [Supercomputer]
[...]
[PlayStation 3 computer cluster [Building Supercomputer Using Playstation 3. 28th of August 2006]]
Cell accelerator board
Mercury Computer Systems offers a PCI Express accelerator card based on the Cell processor in a package designed for high-performance environments. This solution offers order-of-magnitude faster processing for graphics, image, and signal processing workloads. Performance scales dramatically when the application is distributed across multiple Cell Accelerator Boards in a cluster or across the network. Mercury has mapped key algorithms onto the solution, significantly increasing the performance advantages for high-performance computing applications.[17]
Software engineering
Due to the flexible nature of the Cell, there are several possibilities for the utilization of its resources: [18]
Job queue
The PPE maintains a job queue, schedules jobs in SPEs, and monitors progress. Each SPE runs a "mini kernel" whose role is to fetch a job, execute it, and synchronize with the PPE.
Self-multitasking of SPEs
The kernel and scheduling is distributed across the SPEs. Tasks are synchronized using mutexes or semaphores as in a conventional operating system. Ready-to-run tasks wait in a queue for a SPE to execute them. The SPEs use shared memory for all tasks in this configuration.
Stream processing
Each SPE runs a distinct program. Data comes from an input stream, and is sent to SPEs. When an SPE has terminated the processing, the output data is sent to output stream.
This actually provides a very flexible, yet powerful architecture for stream processing, allowing to explicitly schedule each SPE separately. Other processors are also able to perform this kind of processing but this comes often with limitations on the possible kernels to be loaded.
[...]"
Comment
"Memory coherence is an issue that affects the design of computer systems in which two or more processors or cores share a common area of memory."
Indeed, the Cell Broadband Engine (Cell BE) has an advanced Multi-Core Processor (MCP) architecture and a lot of specific usage scenarios. The Cell processor also was one of the many sources of inspiration for the creation of our OS.
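The "Job queue" model quoted above in the section Software engineering can be made tangible with a minimal, self-contained sketch. It is written in portable C++ with threads standing in for the PPE and the 8 SPE mini kernels; all names are hypothetical, and the sketch deliberately abstracts from the actual Cell SDK (e.g. libspe):

    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for the quoted model: the "PPE" maintains a job
    // queue, while each "SPE" runs a mini kernel that fetches a job,
    // executes it, and synchronizes with the "PPE" via the shared queue.
    class JobQueue {
    public:
        void push(std::function<void()> job) {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        bool pop(std::function<void()>& job) {
            std::lock_guard<std::mutex> lock(m_);
            if (jobs_.empty()) return false;
            job = std::move(jobs_.front());
            jobs_.pop();
            return true;
        }
    private:
        std::mutex m_;
        std::queue<std::function<void()>> jobs_;
    };

    void spe_mini_kernel(JobQueue& q) {        // one "SPE" worker
        std::function<void()> job;
        while (q.pop(job)) job();              // fetch, execute, repeat
    }

    int main() {
        JobQueue q;                            // maintained by the "PPE"
        for (int i = 0; i < 64; ++i)
            q.push([i] { /* compute step i on the partitioned data */ });
        std::vector<std::thread> spes;
        for (int i = 0; i < 8; ++i)            // eight "SPEs" as in Cell
            spes.emplace_back(spe_mini_kernel, std::ref(q));
        for (auto& t : spes) t.join();         // "PPE" monitors completion
    }

A real Cell implementation would upload the mini kernel into the local store of each SPE and synchronize via mailboxes or DMA, but the division of labor is the same: the controller maintains the queue, and the co-processors fetch and execute.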
But we also have to recall that the companies IBM and Sony have already been spying on us since around 1998, when we were creating our Evoos, as we have explained in related publications about the Unified Modeling Language (UML), our Model Driven Architecture (MDA), an Integrated Development Environment (IDE) for the programming language Java, ontology-based applications, and much more in the case of IBM, and at least the Artificial Intelligence roBOt (AIBO) in the case of Sony.
In fact, we have not really wondered why this processor was called Cell.
But as we all know, that was never enough by far to stop our success story, and after the publications of our OS we have caught both companies in the act again and again.
Especially missing in this field was a system architecture and system environment, which
makes this general idea of supercomputer, including computer cluster, parallel computing, and heterogeneous computing, the basis for the successors of the Internet, the World Wide Web (WWW), the Global Brain (GB), the Semantic (World Wide) Web (SWWW), Ubiquitous Computing (UbiC) and Internet of Things (IoT), etc.,
eliminates the bandwidth issue of the system bus, and
has a lot of new utilizations.
Many other manufacturers of processors took what we added or even created with our OS for the further development of their products and services, such as for example
Nvidia with CUDA,
Apple with OpenCL,
AMD with Graphics Core Next (GCN), Heterogeneous System Architecture (HSA), and ROCm,
Intel with oneAPI, and
Co. with SYCL, UXL, and so on.
See the related quotes and comments below.
Furthermore, we also recall that the
basic properties of our OS include the
- fields of
- Cluster Computing (CC or ClusterC) and
- Kernel-Less operating system (KLos),
and the
- approach of
- Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE), which allows the exchange of data by sharing pointers,
and also
OSA of our OS integrates all in one, including the
- memory management subsystem, and
- exception-less system call mechanism, including the
- kernel-less asynchronous variant,
- Asynchronous Input/Output (AIO) without context switch, and
- exception-less communication mechanism (see the sketch after this list),
which was created as an essential part of our OS together with the Zero Ontology or Null Ontology, and the next generation of the coherent Ontologic Model (OM) for many reasons, such as eliminating the significant foundational problem and deficit with context switching of the fields of operating system (os), Distributed System (DS) (e.g. Distributed Hash Table (DHT), Content-Addressable System (CAS), BlackBoard System (BBS) (e.g. Tuple Space System (TSS)), and Parallel Computing System (PCS)), and the Arrow System (AS) of the TUNES OS.
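To make the exception-less communication mechanism referenced in the list above more tangible, the following minimal C++ sketch shows a single-producer, single-consumer request ring in shared memory: the posting thread publishes a request and continues without a trap or context switch, while a service thread completes the requests asynchronously. All names and the request layout are hypothetical and only illustrate the general style:

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical request ring: posting needs no trap into a kernel and no
    // context switch on the posting side; a servicer drains it asynchronously.
    struct Request {
        std::uint32_t opcode;
        std::uint64_t arg;
        std::atomic<std::int64_t> result;
    };

    constexpr std::size_t kSlots = 256;            // power of two
    struct Ring {
        Request slots[kSlots];
        std::atomic<std::size_t> head{0};          // written by the producer
        std::atomic<std::size_t> tail{0};          // written by the consumer
    };

    bool post(Ring& r, std::uint32_t op, std::uint64_t arg) { // application side
        std::size_t h = r.head.load(std::memory_order_relaxed);
        if (h - r.tail.load(std::memory_order_acquire) == kSlots)
            return false;                          // ring is full, retry later
        Request& q = r.slots[h % kSlots];
        q.opcode = op;
        q.arg = arg;
        q.result.store(-1, std::memory_order_relaxed);
        r.head.store(h + 1, std::memory_order_release); // publish, do not wait
        return true;
    }

    void serve(Ring& r) {                          // service side
        std::size_t t = r.tail.load(std::memory_order_relaxed);
        while (t != r.head.load(std::memory_order_acquire)) {
            Request& q = r.slots[t % kSlots];
            // ... perform the requested operation; here only a placeholder:
            q.result.store(static_cast<std::int64_t>(q.arg),
                           std::memory_order_release);
            r.tail.store(++t, std::memory_order_release);
        }                                          // a real servicer keeps polling
    }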
All these specific facts in addition to all the other facts support our claims.
See the
OntoLinux Website update of the 1st of August 2007 to find the fields of mobile device, and Autonomous System (AS) and Robotic System (RS), and
OntoLinux More website update of the 24th of August 2007 to find more of the fields of AS and RS,
which all use heterogeneous computing, as explained in the quotes about Open Computing Language (OpenCL) and Heterogeneous System Architecture (HSA).
See also the
section Exotic Operating System of the webpage Links to Software for our SASOS4Fun - Single Address Space Operating System for Fun, which is integrated with our OS and its features, including multicore, GPGPU, and GPU as NPU, and
section Robotics of the webpage Links to Hardware.
We quote an online encyclopedia about the subject CUDA: "In computing, CUDA (originally Compute Unified Device Architecture) is a proprietary[1] parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA API and its runtime: The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism in C and also to specify GPU device specific operations (like moving data between the CPU and the GPU).[2] CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels.[3] In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
[...] CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL.[5][3]
CUDA was created by Nvidia in 2006.[6] When it was first introduced, the name was an acronym for Compute Unified Device Architecture,[7] but Nvidia later dropped the common use of the acronym and no longer uses it.[...]
Background
The graphics processing unit (GPU), as a specialized computer processor, addresses the demands of real-time high-resolution 3D graphics compute-intensive tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than general-purpose central processing unit (CPUs) for algorithms in situations where processing large blocks of data is done in parallel, such as:
cryptographic hash functions
machine learning
molecular dynamics simulations
physics engines
Ian Buck, while at Stanford in 2000, created an 8K gaming rig using 32 GeForce cards, then obtained a DARPA grant to perform general purpose parallel programming on GPUs. He then joined Nvidia, where since 2004 he has been overseeing CUDA development. In pushing for CUDA, Jensen Huang aimed for the Nvidia GPUs to become a general hardware for scientific computing. CUDA was released in 2006. Around 2015, the focus of CUDA changed to neural networks.[8]
Ontology
The following table offers a non-exact description for the ontology of CUDA framework.
[...]
[...]
Advantages
[...]
[...]
Unified virtual memory (CUDA 4.0 and above) [announced around February 2011]
Unified memory (CUDA 6.0 and above) [announced around November 2013]
Shared memory - CUDA exposes a fast shared memory region that can be shared among threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups.[23]
[...]
[...]
Programming abilities
[...]
CUDA provides both a low level API (CUDA Driver API, non single-source) and a higher level API (CUDA Runtime API, single-source). The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. [...]
[...]
[...]
Comparison with competitors
CUDA competes with other GPU computing stacks: Intel OneAPI and AMD ROCm.
Whereas Nvidia's CUDA is closed-source, Intel's OneAPI and AMD's ROCm are open source.
Intel OneAPI
oneAPI is an initiative based in open standards, created to support software development for multiple hardware architectures.[122] The oneAPI libraries must implement open specifications that are discussed publicly by the Special Interest Groups, offering the possibility for any developer or organization to implemente their own versions of oneAPI libraries.[123][124]
Originally made by Intel, other hardware adopters include Fujitsu and Huawei.
Unified Acceleration Foundation (UXL)
Unified Acceleration Foundation (UXL) is a new technology consortium working on the continuation of the OneAPI initiative, with the goal to create a new open standard accelerator software ecosystem, related open standards and specification projects through Working Groups and Special Interest Groups (SIGs). The goal is to offer open alternatives to Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[125]
AMD ROCm
ROCm[126] is an open source software stack for graphics processing unit (GPU) programming from Advanced Micro Devices (AMD).
See also
SYCL - an open standard from Khronos Group for programming a variety of platforms, including GPUs, with single-source modern C++, similar to higher-level CUDA Runtime API (single-source)
[...]
[...]"
Comment
But at the time of the introduction of CUDA and before the presentation of our OS, it was only utilized for rudimentary scientific computing, which basically was only standard linear algebra, like for example operations on matrices, Basic Linear Algebra Subprograms (BLAS), Automatically Tuned Linear Algebra Software (ATLAS), LINPACK and Linear Algebra Package (LAPACK), etc..
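As an illustration of this early, linear-algebra-centric utilization, here is a minimal CUDA C++ sketch of the classic BLAS level-1 routine SAXPY (y = a*x + y); it is a generic textbook example, not code of any quoted party:

    #include <cuda_runtime.h>

    // Classic BLAS level-1 SAXPY: y[i] = a * x[i] + y[i], one thread per element.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr, *y = nullptr;
        cudaMalloc(&x, n * sizeof(float));          // device buffers
        cudaMalloc(&y, n * sizeof(float));
        // ... fill x and y via cudaMemcpy from host arrays ...
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();                    // wait for the kernel
        cudaFree(x);
        cudaFree(y);
        return 0;
    }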
We quote an online encyclopedia about the subject OpenHMPP: "OpenHMPP (HMPP[1 [HMPP: A Hybrid Multi-core Parallel Programming Environment. Workshop on General Purpose Processing on Graphics Processing Units. [4th of October 2007]]] for Hybrid Multicore Parallel Programming) - programming standard for heterogeneous computing. Based on a set of compiler directives, standard is a programming model designed to handle hardware accelerators without the complexity associated with GPU programming. This approach based on directives has been implemented because they enable a loose relationship between an application code and the use of a hardware accelerator (HWA).
Introduction
The OpenHMPP directive-based programming model offers a syntax to offload computations on hardware accelerators and to optimize data movement to/from the hardware memory.
The model is based on works initialized by CAPS (Compiler and Architecture for Embedded and Superscalar Processors) [...].
[...]"
Comment
OpenHMPP focuses on GPGPU and on eliminating the difference between the processor types for the user, while one or more CPUs are used together with one or more GPUs.
The term programming model was substituted with programming standard and the field of heterogeneous computing was added as substitute for hardware accelerators, which means only GPUs in this context, to the quoted webpage on the 20th of June 2012.
But other computing paradigms such as Distributed Computing (DC), Ubiquitous Computing (UbiC) and Internet of Things (IoT), and Mobile Computing (MC or MobileC) were not a concern with such programming models like CUDA and OpenHMPP before the publication of our Ontologic System (OS) with its Ontologic System Architecture (OSA), Ontologic System Components (OSC), and Ontologic Applications and Ontologic Services (OAOS), and also Ontoscope (Os) and Ontoscope Components (OsC).
The same holds for the sharing of a workload by a CPU and a GPU, which are executing the same program.
Therefore, we would also differentiate between the more generalized heterogeneous computing and the more specialized hybrid computing, which has only 2 different types of Instruction Set Architecture (ISA), one of them utilized as co-processor, and which is included in the former field.
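For comparison, the directive-based style of OpenHMPP marks a plain C/C++ function as a codelet, which the compiler may offload to a Hardware Accelerator (HWA), and marks its invocation as a callsite. The following fragment is only a sketch following the published OpenHMPP examples and has not been tested against a CAPS compiler:

    // Sketch in the OpenHMPP directive style: the codelet may be offloaded
    // to a hardware accelerator (here CUDA), the callsite triggers the call.
    #pragma hmpp scale codelet, args[v].io=inout, target=CUDA
    static void scale(int n, float a, float v[]) {
        for (int i = 0; i < n; ++i) v[i] *= a;
    }

    int main() {
        float data[4096];
        // ... initialize data ...
        #pragma hmpp scale callsite
        scale(4096, 2.0f, data);   // executed on the accelerator if available
        return 0;
    }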
See the OntoLinux Website update of the 1st of August 2007 to find mobile device and robotics and the OntoLinux More website update of the 24th of August 2007 to find more robotics, which both use heterogeneous computing.
See also the Ontonics Website update of the 8th of October 2012, which informs about the SASOS4Fun architecture of the Innovation-Pipeline of Ontonics.
See also the section Exotic Operating System of the webpage Links to Software of the website of OntoLinux for our SASOS4Fun - Single Address Space Operating System for Fun, which is integrated with our OS and its features, including multicore, GPGPU, and GPU as NPU.
See also the section Robotics of the webpage Links to Hardware of the same website.
We quote an online encyclopedia about the subject OpenCL of the 9th of June 2008 (first version): "Open Computing Language is a GPGPU utilizing language defined by Apple for release in OS X 10.6.[1]"
Comment
That was all.
We quote an online encyclopedia about the subject OpenCL of the 13th of June 2008 (fifth version): "OpenCL (Open Computing Language) is a language for GPGPU based on C created by Apple. It is scheduled to be introduced in Mac OS X v10.6[1], and has been proposed as an open standard, to be administered by the Khronos Group.
[...]
Quoted from an Apple press release[3]:
["]Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.["]
[...]"
Comment
That was all.
We quote an online encyclopedia about the subject OpenCL: "OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies programming languages (based on C99, C++14 and C++17) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.
[...]
Overview
OpenCL views a computing system as consisting of a number of compute devices, which might be central processing units (CPUs) or "accelerators" such as graphics processing units (GPUs), attached to a host processor (a CPU). It defines a C-like language for writing programs. Functions executed on an OpenCL device are called "kernels".[10]: 17 A single compute device typically consists of several compute units, which in turn comprise multiple processing elements (PEs). [...]
In addition to its C-like programming language, OpenCL defines an application programming interface (API) that allows programs running on the host to launch kernels on the compute devices and manage device memory, which is (at least conceptually) separate from host memory. [...]
In order to open the OpenCL programming model to other languages or to protect the kernel source from inspection, the Standard Portable Intermediate Representation (SPIR)[17 [SPIR - The first open standard intermediate language for parallel compute and graphics. [...] January 21, 2014]] [introduced in 2012 and as SPIR-V in 2015] can be used as a target-independent way to ship kernels between a front-end compiler and the OpenCL back-end.
[...]
OpenCL kernel language
[...]
History
OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed[41] with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008.[42] [...]
[...]
Vendor implementations
Timeline of vendor implementations
June [9 through 13], 2008: During Apple's WWDC conference an early beta of Mac OS X Snow Leopard was made available to the participants, it included the first beta implementation of OpenCL [...].
December 10, 2008: AMD and Nvidia held the first public OpenCL demonstration [...].
[...]
[...]
Portability, performance and alternatives
[...]
The fact that OpenCL allows workloads to be shared by CPU and GPU, executing the same programs, means that programmers can exploit both by dividing work among the devices.[199 [A Survey of CPU-GPU Heterogeneous Computing Techniques. [2015]]] This leads to the problem of deciding how to partition the work, because the relative speeds of operations differ among the devices. Machine learning has been suggested to solve this problem: Grewe and O'Boyle describe a system of support-vector machines trained on compile-time features of program that can decide the device partitioning problem statically, without actually running the programs to measure their performance.[200 [A Static Task Partitioning Approach for Heterogeneous Systems Using OpenCL. [2011]]]
[...]"
Comment
We note
function and
kernel.
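Both noted terms can be seen directly in the programming model: a kernel is a specially marked function that is compiled for the compute devices and launched from the host through the OpenCL API. A minimal host-plus-kernel sketch in the OpenCL 1.x style, with error handling omitted for brevity:

    #include <CL/cl.h>
    #include <cstdio>

    // Kernel source: a function marked __kernel, executed once per work-item.
    static const char* src =
        "__kernel void add(__global const float* a,"
        "                  __global const float* b,"
        "                  __global float* c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main() {
        float a[1024], b[1024], c[1024];
        for (int i = 0; i < 1024; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        cl_platform_id platform; cl_device_id device;
        clGetPlatformIDs(1, &platform, nullptr);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
        cl_context ctx = clCreateContext(nullptr, 1, &device,
                                         nullptr, nullptr, nullptr);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

        cl_mem ma = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, nullptr);
        cl_mem mb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, nullptr);
        cl_mem mc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c,
                                   nullptr, nullptr);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
        clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
        cl_kernel k = clCreateKernel(prog, "add", nullptr);
        clSetKernelArg(k, 0, sizeof ma, &ma);
        clSetKernelArg(k, 1, sizeof mb, &mb);
        clSetKernelArg(k, 2, sizeof mc, &mc);

        size_t global = 1024;                  // one work-item per element
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global,
                               nullptr, 0, nullptr, nullptr);
        clEnqueueReadBuffer(q, mc, CL_TRUE, 0, sizeof c, c, 0, nullptr, nullptr);
        printf("c[0] = %f\n", c[0]);           // expect 3.0
        return 0;
    }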
See the Feature-List #1 22nd of April 2008 of our OS for the list points
Multiprocessing (see Linux)
Parallel operating of graphic cards, [accelerators, adapters, or colloquially GPUs] and other multimedia cards, [accelerators, adapters, or colloquially MPUs] from different manufacturers
User centric migration of applications and data from one computing machine to another, even from a personal computer to a cell phone, or an automotive media center
Cluster functionality.
The second list-point also includes GPU utilized as AI-accelerator or NPU, sound cards, etc. due to our Evoos based on bionics (e.g. AI, ML, CI, ANN, etc.). Obviously, the term multimedia card does not refer to the memory card standard used for solid-state storage.
The third list-point is a slightly encrypted description of the field of Cloud Computing of the second generation (CC 2.0). Cloud Computing of the third generation (CC 3.0) and Cloud-native have been created with our Evoos and our OS.
See also the Ontonics Website update of the 2nd of June 2008, which informs about the MultiCore Competence (MCC) of the Innovation-Pipeline of Ontonics.
Please note once again that the feature-lists are only explicit summaries of implicit features of our OS. In fact, no new features have been added to the foundational OSA since 2006.
See the OntoLinux Website update of the 1st of August 2007 to find mobile device and robot and the OntoLinux More website update of the 24th of August 2007 to find robot, which both use heterogeneous computing.
See also the section Exotic Operating System of the webpage Links to Software for our SASOS4Fun - Single Address Space Operating System for Fun, which is integrated with our OS and its features, including multicore, GPGPU, and GPU as NPU, and the section Robotics of the webpage Links to Hardware.
As we have documented crystal clearly, the company Apple has taken our original and unique works of art as source of inspiration and blueprint for everything it does that matches our technologies, goods, and services for more than 2 decades.
Judging by the date of publication of the Feature-List #1 on the 22nd of April 2008 and the formation of the Khronos Compute Working Group on the 16th of June 2008, around 2 months were sufficient for such a large company to write such a proposal, which merely details the related features of prior art works and of our OS. Therefore, OpenCL is also considered an infringement of the overall copyright.
One can also see the adaptation of macro-level solutions of the field of Distributed Computing (DC), specifically the field of Grid Computing (GC or GridC), for micro-level solutions, such as for example the partition and scheduling of computing tasks on a single heterogeneous computing system, and the field of Autonomic Computing (AC), which was already created with our original and unique Evolutionary operating system (Evoos) with its Evolutionary operating system Architecture (EosA) in 1999.
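The quoted device-partitioning problem reduces in its simplest static form to splitting an index range by a predicted speed ratio. A minimal, device-agnostic C++ sketch, in which the fraction alpha would come from a model such as the quoted support-vector-machine approach and all names are hypothetical:

    #include <cstddef>
    #include <thread>

    // Hypothetical static partitioning: alpha is the predicted share of the
    // work for which the accelerator is faster (0.0 = CPU only, 1.0 = GPU only).
    template <class CpuFn, class GpuFn>
    void partitioned_for(std::size_t n, double alpha, CpuFn on_cpu, GpuFn on_gpu) {
        const std::size_t split = static_cast<std::size_t>(alpha * n);
        std::thread gpu([&] { on_gpu(std::size_t{0}, split); }); // e.g. enqueue a kernel
        on_cpu(split, n);                        // CPU processes the remainder
        gpu.join();                              // both partitions complete here
    }

For example, partitioned_for(n, 0.8, cpu_loop, gpu_launch) would send 80 percent of the iterations to the accelerator and process the rest on the CPU concurrently.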
We quote an online encyclopedia about the subject C++ Accelerated Massive Parallelism (C++ AMP): "C++ Accelerated Massive Parallelism (C++ AMP) is a native programming model that contains elements that span the C++ programming language and its runtime library. It provides an easy way to write programs that compile and execute on data-parallel hardware, such as graphics cards (GPUs).
[...]
The initial C++ AMP release from Microsoft requires at least Windows 7 or Windows Server 2008 R2[, which was released to manufacturing on the 22nd of July 2009].[1] [...]
[...]
The basic concepts behind C++AMP, like using C++ classes to express parallel and heterogeneous programming features, have been inspirational to the SYCL standard.
[...]"
Comment
See the comments to the quotes about OpenCL above and SYCL below.
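For orientation, a minimal C++ AMP fragment in the documented Microsoft style; the array_view containers manage the host-device data transfers implicitly, which is exactly the feature the quote above describes as inspirational for the SYCL standard (a sketch only, as C++ AMP requires the Microsoft toolchain and has meanwhile been deprecated):

    #include <amp.h>
    #include <vector>

    // Element-wise vector addition: array_view wraps host data, and C++ AMP
    // moves it to the accelerator and back implicitly, without explicit copies.
    void add(const std::vector<float>& a, const std::vector<float>& b,
             std::vector<float>& c) {
        const int n = static_cast<int>(c.size());
        concurrency::array_view<const float, 1> av(n, a), bv(n, b);
        concurrency::array_view<float, 1> cv(n, c);
        cv.discard_data();                         // no need to copy c in
        concurrency::parallel_for_each(cv.extent,
            [=](concurrency::index<1> i) restrict(amp) {
                cv[i] = av[i] + bv[i];
            });
        cv.synchronize();                          // copy the result back to c
    }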
We quote an online encyclopedia about the subject SYCL: "SYCL (pronounced "sickle") is a higher-level programming model to improve programming productivity on various hardware accelerators. It is a single-source embedded domain-specific language (eDSL) based on pure C++17. It is a standard developed by Khronos Group, announced in March 2014.
Origin of the name
SYCL [...] originally stood for SYstem-wide Compute Language,[2] but since 2020 SYCL developers have stated that SYCL is a name and have made clear that it is no longer an acronym and contains no reference to OpenCL.[3]
Purpose
SYCL is a [...] cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency inspired by OpenCL that enables code for heterogeneous processors to be written in a "single-source" style using completely standard C++. SYCL enables single-source development where C++ template functions can contain both host and device code to construct complex algorithms that use hardware accelerators, and then re-use them throughout their source code on different types of data.
While the SYCL standard started as the higher-level programming model sub-group of the OpenCL working group and was originally developed for use with OpenCL and [Standard Portable Intermediate Representation (]SPIR[) - The first open standard intermediate language for parallel compute and graphics. 21st of January 2014] [introduced in 2012 and as SPIR-V in 2015], SYCL is a Khronos Group workgroup independent from the OpenCL working group since September 20, 2019 and starting with SYCL 2020, SYCL has been generalized as a more general heterogeneous framework able to target other systems. This is now possible with the concept of a generic backend to target any acceleration API while enabling full interoperability with the target API, like using existing native libraries to reach the maximum performance along with simplifying the programming effort.
[...]
Software
Some notable software fields that make use of SYCL include the following (with examples):
Bioinformatics
- [...] A molecular dynamics software widely used in bioinformatics and computational chemistry. [...]
- [...] A molecular docking software that utilizes SYCL for accelerating computational tasks related to molecular structure analysis and docking simulations.[31]
- [...] Another molecular docking software that leverages SYCL to accelerate the process of predicting how small molecules bind to a receptor of a known 3D structure.[32]
Artificial Intelligence
- [...] software library that performs inference on various Large Language Models [...]
Automotive Industry
- ISO 26262: The international standard for functional safety of automotive electrical and electronic systems. SYCL is used in automotive applications to accelerate safety-critical computations and simulations, ensuring compliance with stringent safety standards.[34]
Cosmology
- CRK-HACC: A cosmological n-body simulation code that has been ported to SYCL. It uses SYCL to accelerate calculations related to large-scale structure formation and dynamics in the universe.[35]
[...]
Comparison with other Tools
The open standards SYCL and OpenCL are similar to the programming models of the proprietary stack CUDA from Nvidia and HIP from the open-source stack ROCm, supported by AMD.[38]
In the Khronos Group realm, OpenCL and Vulkan are the low-level non-single source APIs, providing fine-grained control over hardware resources and operations. OpenCL is widely used for parallel programming across various hardware types, while Vulkan primarily focuses on high-performance graphics and computing tasks.[39]
SYCL, on the other hand, is the high-level single-source C++ embedded domain-specific language (eDSL). It enables developers to write code for heterogeneous computing systems, including CPUs, GPUs, and other accelerators, using a single-source approach. This means that both host and device code can be written in the same C++ source file.[40]
CUDA
By comparison, the single-source C++ embedded domain-specific language version of CUDA, which is named "CUDA Runtime API," is somewhat similar to SYCL. In fact, Intel released a tool called SYCLOMATIC that automatically translated code from CUDA to SYCL.[41] However, there is a less known non-single-source version of CUDA, which is called "CUDA Driver API," similar to OpenCL, and used, for example, by the CUDA Runtime API implementation itself.[38]
SYCL extends the C++ AMP features, relieving the programmer from explicitly transferring data between the host and devices by using buffers and accessors. This is in contrast to CUDA (prior to the introduction of Unified Memory in CUDA 6), where explicit data transfers were required. Starting with SYCL 2020, it is also possible to use USM instead of buffers and accessors, providing a lower-level programming model similar to Unified Memory in CUDA.[42]
SYCL is higher-level than C++ AMP and CUDA since you do not need to build an explicit dependency graph between all the kernels, and it provides you with automatic asynchronous scheduling of the kernels with communication and computation overlap. This is all done by using the concept of accessors without requiring any compiler support.[43]
Unlike C++ AMP and CUDA, SYCL is a pure C++ eDSL without any C++ extension. This allows for a basic CPU implementation that relies on pure runtime without any specific compiler.[40]
[...]
ROCm HIP
ROCm HIP targets Nvidia GPU, AMD GPU, and x86 CPU. HIP is a lower-level API that closely resembles CUDA's APIs.[46] For example, AMD released a tool called HIPIFY that can automatically translate CUDA code to HIP.[47] Therefore, many of the points mentioned in the comparison between CUDA and SYCL also apply to the comparison between HIP and SYCL.[48]
ROCm HIP has some similarities to SYCL in the sense that it can target various vendors (AMD and Nvidia) and accelerator types (GPU and CPU).[49] However, SYCL can target a broader range of accelerators and vendors. [...]
[...]
[...]
OpenMP
OpenMP targets computational offloading to external accelerators,[57] primarily focusing on multi-core architectures and GPUs. SYCL, on the other hand, is oriented towards a broader range of devices due to its integration with OpenCL, which enables support for various types of hardware accelerators.[58 [OpenCL - The Open Standard for Parallel Programming of Heterogeneous Systems]]
[...]
[...]"
Comment
Although the authors of SYCL tried to implement an own expression of idea, SYCL is even more similar to the related part of our OS than OpenCL already is, because the embedded Domain-Specific Language (eDSL) and all other DSLs have ontologies and Language Models (LMs), which are provided with our coherent Ontologic Model (OM) since the creation of our Evoos, and which again enable both the single-source and the non-single-source approaches and developments in these fields of GPGPU and heterogeneous computing.
Our Ontologic Programming (OP) paradigm makes this point even more crystal clear, because it is on a higher metalevel than CUDA, OpenCL, SYCL with eDSL, SPIR, GCN, HSA, ROCm, and so on.
The latter shows the causal link with this specific expression of idea, which is a part of our overall original and unique work of art titled Ontologic System and created by C.S., and which also underlies SYCL, which again shows that our Ontologic System (OS) with its Ontologic System Architecture (OSA) has been taken as source of inspiration and (to some extent as) blueprint for the implementation of SYCL.
Not surprisingly, the notable software fields are all fields related to our OS.
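For comparison with the quoted buffer and accessor mechanism, a minimal single-source SYCL 2020 sketch: host and device code sit in one C++ translation unit, and the runtime derives all data movement from the declared accessors (assuming a SYCL 2020 implementation such as DPC++ or AdaptiveCpp):

    #include <sycl/sycl.hpp>
    #include <vector>

    int main() {
        const size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
        sycl::queue q;                             // picks a default device
        {
            sycl::buffer<float, 1> ab(a.data(), sycl::range<1>(n));
            sycl::buffer<float, 1> bb(b.data(), sycl::range<1>(n));
            sycl::buffer<float, 1> cb(c.data(), sycl::range<1>(n));
            q.submit([&](sycl::handler& h) {
                sycl::accessor pa(ab, h, sycl::read_only);
                sycl::accessor pb(bb, h, sycl::read_only);
                sycl::accessor pc(cb, h, sycl::write_only, sycl::no_init);
                // Device code, in the same source file as the host code:
                h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                    pc[i] = pa[i] + pb[i];
                });
            });
        }   // buffer destructors synchronize; c now holds the sums
        return c[0] == 3.0f ? 0 : 1;
    }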
We quote an online encyclopedia about the subject Graphics Core Next (GCN): "Graphics Core Next (GCN)[1] is the codename for a series of microarchitectures and an instruction set architecture that were developed by AMD for its GPUs as the successor to its TeraScale microarchitecture. The first product featuring GCN was launched on January 9, 2012.[2]
[...] GCN requires considerably more transistors than TeraScale, but offers advantages for general-purpose GPU (GPGPU) computation due to a simpler compiler.
[...] GCN was also used in the graphics portion of Accelerated Processing Units (APUs), including those in the PlayStation 4 and Xbox One.
[...]
Microarchitectures
[...]
Command processing
Graphics Command Processor
The Graphics Command Processor (GCP) is a functional unit of the GCN microarchitecture. Among other tasks, it is responsible for the handling of asynchronous shaders.[10]
Asynchronous Compute Engine
The Asynchronous Compute Engine (ACE) is a distinct functional block serving computing purposes, whose purpose is similar to that of the Graphics Command Processor.[ambiguous]
Schedulers
[...]
Geometric processor
The geometry processor contains a Geometry Assembler, a Tesselator, and a Vertex Assembler.
The Tesselator is capable of doing tessellation in hardware as defined by Direct3D 11 and OpenGL 4.5 (see AMD January 21, 2017),[11] [...].
Compute units
[...]
[...]
Audio and video acceleration blocks
[...]
Unified virtual memory [or unified virtual address space]
In a preview in 2011, AnandTech wrote about the unified virtual memory, supported by Graphics Core Next.[18 [AMD's Graphics Core Next Preview: AMD's New GPU, Architecture For Compute. [...] December 21, 2011]]
[...]
[Image caption:] GCN supports "unified virtual memory", hence enabling zero-copy, instead of the data, only the pointers are copied, "passed". This is a paramount HSA feature.
[...]
[Image caption:] AMD APUs with GCN graphics gain from unified main memory conserving scarce bandwidth.[19]
Heterogeneous System Architecture (HSA)
Some of the specific HSA features implemented in the hardware need support from the operating system's kernel (its subsystems) and/or from specific device drivers. For example, in July 2014, AMD published a set of 83 patches to be merged into Linux kernel mainline 3.17 for supporting their Graphics Core Next-based Radeon graphics cards. The so-called HSA kernel driver resides in the directory /drivers/gpu/hsa, while the DRM graphics device drivers reside in /drivers/gpu/drm[21] and augment the already existing DRM drivers for Radeon cards.[22] This very first implementation focuses on a single "Kaveri" APU and works alongside the existing Radeon kernel graphics driver (kgd).
[...]"
Comment
Somehow, the microarchitecture reminds us of our OS with its OSA.
We note the terms unified memory, unified virtual memory, and unified virtual address space.
HSA came only in April 2013 and hence long after December 2011, when the unified virtual memory of GCN was previewed.
See also the comment to the quote about OpenHMPP above for comparison of the memory management.
We quote an online encyclopedia about the subject Heterogeneous System Architecture: "Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allow for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.[1 [AMD Unveils its Heterogeneous Uniform Memory Access (hUMA) Technology. [30th of April 2013]]] The HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and make these various devices more compatible from a programmer's perspective,[2]: 3 [3] relieving the programmer of the task of planning the moving of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4]
CUDA and OpenCL as well as most other fairly advanced programming languages can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.[6] HSA allows programs to use the graphics processor for floating point calculations without separate memory or scheduling.[7]
Rationale
The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs, as well.
[...]
Overview
Originally introduced by embedded systems such as the Cell Broadband Engine, sharing system memory directly between multiple system actors makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units - central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuits (ASICs). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.
Among its main features, HSA defines a unified virtual address space [unified virtual memory] for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]: 6-7 To render interoperability possible and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
So far, the HSA specifications cover:
HSA Intermediate Layer
HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs
similar[according to whom?] to LLVM Intermediate Representation and SPIR (used by OpenCL and Vulkan)
[...]
HSA memory model
compatible with C++11, OpenCL, Java and .NET memory models
[...]
designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
[...]
HSA dispatcher and run-time
designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing
any core can schedule work for any other, including itself
significant reduction of overhead of scheduling work for a core
Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.[6]
Block diagrams
The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures. [The same images are shown in the description of Graphics Core Next (GCN) but with other captions (see above).]
[...]
[Image caption:] HSA brings unified virtual memory and facilitates passing pointers over PCI Express instead of copying the entire data.
[...]
[Image caption:] Unified main memory, where GPU and CPU are HSA-enabled. This makes zero-copy operation possible.[8]
[...]"
Comment
Indeed, the
Embedded System (ES) is a highly integrated system on a circuit board and
Cell Broadband Engine (Cell BE) is a highly integrated System-on-a-Chip (SoC) circuit on a substrate.
See the quote and comment about Cell BE above.
See also the comment to the quote about Graphics Core Next (GCN) above for comparison.
Therefore, no original and unique work has been created and presented by Nvidia, AMD, Intel, and Co. in this specific context.
But we finally have the explanation for what was going on, which gave us the impression mentioned at the beginning of this clarification, and which again shows what has been taken from our OS in the related fields as well.
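We also add a minimal sketch of such a unified virtual address space with pointer sharing, assuming a system with a HIP-capable AMD GPU and the hipcc compiler of the ROCm stack; the kernel and sizes are only chosen for illustration:

    // Minimal sketch of unified virtual memory with pointer sharing,
    // assuming a HIP-capable AMD GPU and the hipcc toolchain.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    // GPU kernel that works directly on the shared allocation.
    __global__ void scale(float* data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float* data = nullptr;
        // One allocation, one pointer, shared by CPU and GPU.
        hipMallocManaged((void**)&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;   // the CPU writes
        hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                           data, n, 2.0f);            // the GPU works in place
        hipDeviceSynchronize();                       // wait for the GPU
        std::printf("data[0] = %f\n", data[0]);       // the CPU reads the result
        hipFree(data);
        return 0;
    }

The single allocation is visible under one and the same pointer on the host and the device, which makes the quoted zero-copy operation possible instead of copying the entire data.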
We quote an online encyclopedia about the subject Nvidia GameWorks: "Nvidia GameWorks is a middleware software suite developed by Nvidia.[1] The Visual FX, PhysX, and Optix SDKs provide a wide range of enhancements pre-optimized for Nvidia GPUs.[2] [...]
Components
Nvidia Gameworks consists of several main components:
VisualFX: For rendering effects such as smoke, fire, water, depth of field, soft shadows, HBAO+, TXAA, FaceWorks, and HairWorks.
PhysX: For physics, destruction, particle and fluid simulations.
OptiX: For baked lighting and general-purpose ray-tracing.
Core SDK: For facilitating development on Nvidia hardware.
[...]"
Comment
The quote has been added for completeness and better comparison with GPUOpen (see below).
We quote an online encyclopedia about the subject GPUOpen: "GPUOpen is a middleware software suite [...] that offers advanced visual effects for computer games.[...] GPUOpen serves as an alternative to, and a direct competitor of Nvidia GameWorks. GPUOpen is similar to GameWorks in that it encompasses several different graphics technologies as its main components that were previously independent and separate from one another.[2] However, GPUOpen is partially open source software, unlike GameWorks which is proprietary and closed.
History
GPUOpen was announced on December 15, 2015,[3][4][2][5][6] and released on January 26, 2016.
Rationale
[...] AMD's [...] Worldwide Gaming Engineering, argues that "it can be difficult for developers to leverage their [Research and Development (]R&D[)] investment on both consoles and PC because of the disparity between the two platforms" and that "proprietary libraries or tools chains with "black box" APIs prevent developers from accessing the code for maintenance, porting or optimizations purposes".[7] He says that upcoming architectures [...] "include many features not exposed today in PC graphics APIs".
AMD designed GPUOpen to be a competing open-source middleware stack [...]. The libraries are intended to increase software portability between video game consoles, PCs and also high-performance computing.[8]
Components
GPUOpen unifies many of AMD's previously separate tools and solutions into one package, also fully open-sourcing them [...].[4] GPUOpen also makes it easy for developers to get low-level GPU access.[9]
Additionally AMD wants to grant interested developers the kind of low-level "direct access" to their GCN-based GPUs that surpasses the possibilities of Direct3D 12 or Vulkan. AMD mentioned e.g. a low-level access to the Asynchronous Compute Engines (ACEs). The ACEs implement "Asynchronous Compute", but they cannot be freely configured under either Vulkan or Direct3D 12.
GPUOpen is made up of several main components, tools, and SDKs.[2]
[...]
Professional Compute
As of 2022, AMD compute software ecosystem is regrouped under the ROCm metaproject.
AMD Boltzmann Initiative: amdgpu (Linux kernel 4.2+) and amdkfd (Linux kernel 3.19+)
Software around Heterogeneous System Architecture (HSA), General-Purpose computing on Graphics Processing Units (GPGPU) and High-Performance Computing (HPC)
[...]"
Comment
We note
lack of general support for heterogeneous computing and AI accelerators by an operating system, because "many features not exposed today in PC graphics APIs",
Asynchronous Compute Engines (ACEs), and
metaproject.
Bingo!!! Bingo!!! Bingo!!!
We also note that it is not a middleware stack, because the illegal HSA has as a paramount feature unified memory with pointer sharing, which eliminates the layering approach, in accordance with the SPACE approach integrated in our OS as a paramount feature around 6 years before AMD.
Most probably, if not obviously, the related webpages have been fabricated by AMD, as can be seen by the repeating keywords and text passages. We also note that it follows our OS with its OSA, which is the reason why we got the impression mentioned at the beginning of this clarification all the time.
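We also add a minimal sketch of asynchronous compute as it is exposed to application programmers, assuming a HIP-capable AMD GPU and the hipcc compiler; the two HIP streams only illustrate the concept of independent command queues and are not the quoted low-level "direct access" to the ACEs:

    // Minimal sketch of asynchronous compute with two command queues (streams),
    // assuming a HIP-capable AMD GPU and the hipcc toolchain.
    #include <hip/hip_runtime.h>

    __global__ void busy(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = data[i] * 2.0f + 1.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b;
        hipMalloc((void**)&a, n * sizeof(float));
        hipMalloc((void**)&b, n * sizeof(float));
        hipMemset(a, 0, n * sizeof(float));
        hipMemset(b, 0, n * sizeof(float));
        hipStream_t s0, s1;
        hipStreamCreate(&s0);
        hipStreamCreate(&s1);
        // Two independent kernels are submitted to two queues; the hardware is
        // free to execute them concurrently (asynchronous compute).
        hipLaunchKernelGGL(busy, dim3((n + 255) / 256), dim3(256), 0, s0, a, n);
        hipLaunchKernelGGL(busy, dim3((n + 255) / 256), dim3(256), 0, s1, b, n);
        hipStreamSynchronize(s0);
        hipStreamSynchronize(s1);
        hipStreamDestroy(s0);
        hipStreamDestroy(s1);
        hipFree(a);
        hipFree(b);
        return 0;
    }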
We quote an online encyclopedia about the subject ROCm: "ROCm[3] is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains: general-purpose computing on graphics processing units (GPGPU), high performance computing (HPC), heterogeneous computing. It offers several programming models: HIP (GPU-kernel-based programming), OpenMP (directive-based programming), and OpenCL.
[...] ROCm initially stood for Radeon Open Compute platform; however, due to Open Compute being a registered trademark, ROCm is no longer an acronym [...].
Background
[...]
ROCm was launched around 2016[5] [after the announcement on the 16th of November 2015] with the Boltzmann Initiative.[6] ROCm stack builds upon previous AMD GPU stacks; some tools trace back to GPUOpen and others to the Heterogeneous System Architecture (HSA).
Heterogeneous System Architecture Intermediate Language
HSAIL[7] was aimed at producing a middle-level, hardware-agnostic intermediate representation that could be JIT-compiled to the eventual hardware (GPU, FPGA...) using the appropriate finalizer. This approach was dropped for ROCm: now it builds only GPU code, using LLVM, and its AMDGPU backend that was upstreamed,[8] although there is still research on such enhanced modularity with LLVM MLIR.[9]
[...]
Software ecosystem
[...]
Third-party integration
The main consumers of the stack are machine learning and high-performance computing/GPGPU applications.
Machine learning
[...]
Supercomputing
[...]
Other acceleration & graphics interoperation
As of version 3.0, Blender can now use HIP compute kernels for its renderer Cycles.[27]
[...]
Components
There is one kernel-space component, ROCk, and the rest - there is roughly a hundred components in the stack - is made of user-space modules. [...]
[...]
[...]
Comparison with competitors
ROCm competes with other GPU computing stacks: Nvidia CUDA and Intel OneAPI.
Nvidia CUDA
[...]
CUDA is able to run on consumer GPUs, whereas ROCm support is mostly offered for professional hardware such as AMD Instinct and AMD Radeon Pro.
Nvidia provides a C/C++-centered frontend and its Parallel Thread Execution (PTX) LLVM GPU backend as the Nvidia CUDA Compiler (NVCC).
Intel OneAPI
[...]
Unified Acceleration Foundation (UXL)
Unified Acceleration Foundation (UXL) is a new technology consortium that are working on the continuation of the OneAPI initiative, with the goal to create a new open standard accelerator software ecosystem, related open standards and specification projects through Working Groups and Special Interest Groups (SIGs). The goal will compete with Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[45] "
Comment
HSA and ROCm also show nicely what we are talking about in this clarification: several elements were missing and were then added by our OS with its OS Architecture (OSA), and its compilation, integration, and unification.
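We also add a minimal sketch of the quoted GPU-kernel-based programming model HIP with its explicit copies between the disjoint host and device memories, assuming a HIP-capable GPU and the hipcc compiler; the vector addition is only chosen for illustration:

    // Minimal sketch of the HIP GPU-kernel-based programming model with
    // explicit device memory and copies, assuming hipcc and a HIP-capable GPU.
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
        float *da, *db, *dc;
        hipMalloc((void**)&da, n * sizeof(float));
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));
        // In contrast to unified memory, the data is explicitly moved between
        // the disjoint host and device memories.
        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipLaunchKernelGGL(vadd, dim3((n + 255) / 256), dim3(256), 0, 0,
                           da, db, dc, n);
        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("hc[0] = %f\n", hc[0]);           // 3.0
        hipFree(da);
        hipFree(db);
        hipFree(dc);
        return 0;
    }

The explicit hipMemcpy calls show exactly the planning of the moving of data between disjoint memories of which the unified memory of HSA is said to relieve the programmer.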
We quote an online encyclopedia about the subjects oneAPI and Unified Acceleration Foundation (UXL): "oneAPI is an open standard, adopted by Intel,[1] for a unified application programming interface (API) intended to be used across different computing accelerator (coprocessor) architectures, including GPUs, AI accelerators and field-programmable gate arrays. It is intended to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture.[2][3][4][5]
oneAPI competes with other GPU computing stacks: CUDA by Nvidia and ROCm by AMD.
Specification
The oneAPI specification extends existing developer programming models to enable multiple hardware architectures through a data-parallel language, a set of library APIs, and a low-level hardware interface to support cross-architecture programming. It builds upon industry standards and provides an open, cross-platform developer stack.[6][7]
Data Parallel C++
DPC++[8][9] is a programming language implementation of oneAPI, built upon the ISO C++ and Khronos Group SYCL standards.[10] DPC++ is an implementation of SYCL with extensions that are proposed for inclusion in future revisions of the SYCL standard, including: unified shared memory, group algorithms, and sub-groups.[11][12][13]
Libraries
The set of APIs[6] spans several domains, including libraries for linear algebra, deep learning, machine learning, video processing, and others.
[...]
Hardware abstraction layer [(HAL)]
oneAPI Level Zero,[15][16][17] the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator needs to interface with compiler runtimes and other developer tools.
[...]
Unified Acceleration Foundation (UXL) and the future for oneAPI
Unified Acceleration Foundation (UXL) is a new technology consortium that are working on the continuation of the OneAPI initiative, with the goal to create a new open standard accelerator software ecosystem, related open standards and specification projects through Working Groups and Special Interest Groups (SIGs). The goal will compete with Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[29]
[...]"
Comment
The oneAPI was announced around December 2018 and the UXL was announced around September 2023.
We are very sure that our friends and readers are able to easily spot the fields of
Hardware Abstraction Layer (HAL) or nanokernel,
Real-Time operating system (RTos),
Capability-Based operating system (CBos),
Microkernel-Based operating system (MBos),
Kernel-Less operating system (KLos), and
Distributed operating system (Dos),
specifically the
capability-based microkernel L4 with HAL.
One can also see the connection with what is wrongly called Cloud, Edge, and Fog Computing (CEFC) and Cloud-native based on operating system level virtualization or containerization, operating system Virtual Machine (osVM), and so on.
See also the comment to the quote about SYCL above and the quote of the report about UXL and the comment to it below.
See the comments to the quotes about the foundations and the utilizations of oneAPI before, which show that they are already highly problematic from the legal point of view.
The strategy to specify an Application Programming Interface (API) has to be viewed as a dirty trick in the context discussed in this clarification, which does not work, because we have shown a causal link with our expression of idea, compilation, integration, unification, OS Architecture (OSA), and OS Components (OSC) of our sui generis masterpiece titled Ontologic System and created by C.S..
Our OS
was taken as source of inspiration and blueprint without referencing, and
is performed and reproduced without having the allowance and licensing.
Therefore, the exclusive moral rights respectively Lanham (Trademark) rights and other rights and properties of C.S. and our corporation have been infringed, specifically by an interference with, and also obstruction, undermining, and harm of the exclusive exploitation (e.g. commercialization (e.g. monetization)).
See also the notes
There is only one OS and Ov of the 19th of September 2023,
OS and Os are originals of the 24th of January 2024, and
There is only one OS and Ov #2 of the 1st of April 2024.
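We also add a minimal sketch of the quoted unified shared memory of DPC++ respectively SYCL, assuming a SYCL 2020 implementation such as DPC++ (compiled, for example, with icpx -fsycl); the kernel and sizes are only chosen for illustration:

    // Minimal sketch of oneAPI/DPC++-style unified shared memory (USM),
    // assuming a SYCL 2020 implementation such as Intel's DPC++.
    #include <sycl/sycl.hpp>
    #include <cstdio>

    int main() {
        sycl::queue q;                                // the default accelerator
        const int n = 1 << 20;
        // One pointer usable on host and device (unified shared memory).
        float* data = sycl::malloc_shared<float>(n, q);
        for (int i = 0; i < n; ++i) data[i] = 1.0f;   // the host writes
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            data[i] *= 2.0f;                          // the device works in place
        }).wait();
        std::printf("data[0] = %f\n", data[0]);       // the host reads the result
        sycl::free(data, q);
        return 0;
    }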
We also quote a report, which is about the Unified Acceleration Foundation (UXL) and was publicized on the 26th of March 2024: "[...] Behind the plot to break Nvidia's grip on AI by targeting software
[...]
[...] More than 4 million global developers rely on Nvidia's CUDA software platform to build AI and other apps.
Now a coalition of tech companies that includes Qualcomm, Google and Intel plans to loosen Nvidia's chokehold by going after the chip giant's secret weapon: the software that keeps developers tied to Nvidia chips. They are part of an expanding group of financiers and companies hacking away at Nvidia's dominance in AI.
[...]
Starting with a piece of technology developed by Intel called OneAPI, the UXL Foundation, a consortium of tech companies, plans to build a suite of software and tools that will be able to power multiple types of AI accelerator chips, executives involved with the group told [a media company]. The open-source project aims to make computer code run on any machine, regardless of what chip and hardware powers it.
"It's about specifically - in the context of machine learning frameworks - how do we create an open ecosystem, and promote productivity and choice in hardware," Google's director and chief technologist of high-performance computing, Bill Magro, told [a media company] in an interview. [...]
[...] These executives stressed the need to build a solid foundation to include contributions from multiple companies that can also be deployed on any chip or hardware.
Beyond the initial companies involved, UXL will court cloud-computing platforms such as Amazon.com's Amazon Web Services (AWS) and Microsoft's Azure, as well as additional chipmakers.
[...] Intel's OneAPI is already useable, and the second step is to create a standard programming model of computing designed for AI.
UXL plans to put its resources toward addressing the most pressing computing problems dominated by a few chipmakers, such as the latest AI apps and high-performance computing applications. Those early plans feed into the organization's longer-term goal of winning over a critical mass of developers to its platform.
[...]"
Comment
We also quote an online encyclopedia about the subject OpenAI: "[...]
Stargate and other supercomputers
Stargate is a potential artificial intelligence supercomputer in development by Microsoft and OpenAI.[251 [Microsoft, OpenAI plan $100 billion data-center project, media report says. Reuters. [29th of March 2024]]] Stargate is designed as part of a greater data center project, which could represent an investment of as much as $100 billion by Microsoft.[252 [Microsoft and OpenAI Plot $100 Billion Stargate AI Supercomputer. [29th of March 2024]]]
Stargate is reported to be part of a series of AI-related construction projects planned in the next few years by the companies Microsoft and OpenAI.[252] The supercomputers will be constructed in five phases.[251] The fourth phase should consist of a smaller OpenAI supercomputer, planned to launch around 2026.[251] Stargate is the fifth and final phase of the program, will take five to six years to complete, and is slated to launch around 2028.[252]
The artificial intelligence of Stargate is slated to be contained on millions of special server chips.[252] The supercomputer's data center will be built in the US across 700 acres of land.[252] It has a planned power consumption of 5 gigawatts, for which it could rely on nuclear energy.[252] The name "Stargate" is a homage to the 1994 sci-fi film Stargate.[252]
Comment
We already explained many years ago the sources of inspiration for the logo of Ontologics.
Our fans and readers should be able to easily spot the 2 important symbols of the Stargate saga, which are the 7th symbol of the Abydos gate point of origin with the meaning "The day of our success" and the symbol of the Earth Alpha Gate point of origin.
We also add for fun: the Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE) approach with its primitives spaces, domains, and portals.
One can also see our
Ontologic Net (ON) with Interconnected supercomputer (Intersup), Bionics (e.g. AI, ML, CI, ANN, etc.), Cybernetics, and Ontologics, and
Ontologic Web (OW),
and our Space and Time Computing and Networking (STCN) paradigm.
Conclusion
We simply say: From Internet and kernel-less to Intersup and serverless.
The hardware manufacturers follow the software manufacturers and vice versa, and the software manufacturers follow us, which implies that the hardware manufacturers also follow us, which is now proven with this clarification.
The comments to the quotes about graphics card, AI accelerator, and CUDA taken together already provide a relatively precise white, yellow, or red line. Not more than (specialized) High-Performance Computing (HPC), Parallel Computing (PC or ParaC), Cluster Computing (CC or ClusterC), Master-Worker (MW), Client-Server (CS), Peer-to-Peer Computing (P2PC), and Grid Computing (GC or Grid), and some GPU programming and vendor-specific heterogeneous computing existed before the publication of our OS at the end of October 2006; specifically, there was no cloud, no (intra and inter) supercomputer infrastructure, no Interconnected supercomputer (Intersup), ...
In fact, what we have here is an overall transition:
transition on the level of software and hardware,
transition on the level of computing and networking,
transition from personal computer and mobile computer to (pervasive) supercomputer,
transition from operating system to Ontologic System,
transition from Interconnected network (Internet) and kernel-less to Interconnected supercomputer (Intersup) and serverless respectively OntoNet,
transition from World Wide Web (WWW), Global Brain (GB), and Semantic (World Wide Web) (SWWW) to OntoWeb, and
transition from Metaverse to Ontoverse.
In fact,
CUDA is illegal in part, and
OpenCL is illegal in whole or in part,
while beyond the red line and not discussable are
SYCL,
HSA,
ROCm,
oneAPI,
UXL,
etc.
Already available are for example
Open Multi-Processing (OpenMP),
CUDA,
Open Hybrid Multicore Parallel Programming (OpenHMPP),
Open ACCelerators (OpenACC),
C++ Accelerated Massive Parallelism (C++ AMP),
etc.,
which raises the question why another one is required and implemented. And the simple answer is: it is about the utilization of PUs and heterogeneous computing, and our Ontologic Net (ON) with its Interconnected supercomputer (Intersup), and so on.
In contrast to us, none of them has seen that the multicore GPU is more than just a co-processor or accelerator. We have seen that it is basically more of a main processor or CPU of the compute nodes of a supercomputer.
And this fact is reflected in the evolution of the GPU to the NPU and the supporting software.
This can be seen (i.a.) in the unified memory, unified virtual memory, or unified virtual address space, which adapts parallel computing and supercomputing on this level.
Even if one of them constitutes prior art, we still cannot see a relation to the specific usages 2. Cloud gaming, 4. Cloud Workstation, 5. Artificial Intelligence Cloud, and 6. Automated/Driverless car, and also Artificial Intelligence supercomputer, supercomputer infrastructure, intelligent cloud, etc., of a GPU and other types of PUs.
And both, this AI supercomputer infrastructure and this intelligent cloud, are part of our
Ontologic Net (ON), including our Interconnected supercomputer (Intersup) and SoftBionics (SB), and
Ontologic Web (OW), including our Universal Brain Space or Global Brain 2.0, formerly Global Brain Grid.
This also explains the
direction of development in general and
very high investments in particular.
This also explains the
expression of idea, compilation, integration, unification, architecture, and components, and
exclusive moral rights respectively Lanham (Trademark) rights, and the exclusive and mandatory infrastructures of SOPR and Societies with foundational and essential facilities, technologies, goods, and services.
We have a transition and this transition shows a causal link with our OS and its OSA, which have been used as source of inspiration and blueprint.
In the legal scope of ... the Ontoverse (Ov) these things have no place in the supercomputers, data centers, Intersups, and so on.
See also the Comment of the Day of the 25th of February 2024.
08:33 UTC+2
Sufficient to officially bust frauds and crimes
We already noticed that the whole discussion is not about the determination of the infringements of the rights and properties of C.S. and our corporation by other entities, that perform and reproduce the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. without referencing, allowance and licensing, and other legally required actions, but about the demand to establish maximal freedom, openness, and fairness, or freedom of choice, innovation, and competition respectively a level playing field, and to democratize said personal rights and private properties.
But a closer look shows that they
do not want to compete,
do not want to reference and label properly,
do not want to have any restriction when performing and reproducing everything,
do not want to comply with the Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR), specifically when it comes to the
- payment of FRANDAC royalties,
- exploitation (e.g. commercialization (e.g. monetization)), and
- utilization of the essential facilities,
do not want to pay damage compensations,
do not want to do anything else that is legally required,
which ultimately means nothing else than that they want to get the complete control respectively the personal and private exclusive rights and properties of C.S. and our corporation to dictate and enforce their terms and conditions.
That is totally irrational and ridiculous when governments and large companies demand that in general and when they ignore all laws and only want to protect their cliques with their (illegal) monopolies respectively walled gardens in particular, which we also called a capricious act of cold expropriation in case of governments and abuse of market power or just blackmailing in case of large companies.
But how long it will take for legal proceedings to be started and finalized is not relevant, because the plagiarisms and fakes will not be available anymore just from the moment the fraudulent and even serious criminal activities have been busted. :)
It is like the Volkswagen emissions scandal, also known as Dieselgate. All manipulated cars were removed from the showrooms immediately and the already sold cars got legal hardware and software as soon as possible, if possible.
See also the notes
SOPR cancels out-of-court agreements of the 17th of August 2024, and
Inability to act and stalemate support all claims of the 27th of August 2024.
10:39 UTC+2
SOPR considering 17% to 20% for Os variants w.i.c. AI phone, AIPC, etc.
Ontoscope (Os)
wrongly and illegally called (w.i.c.)
Artificial Intelligence (AI)
Artificial Intelligence (AI) Personal Computer (PC) (AIPC)
et cetera (etc.)
Our Society for Ontological Performance and Reproduction (SOPR) is also considering increasing the royalties, which are regulated by the (unofficial) Terms of Service (ToS) with its License Model (LM) of our SOPR.
So far, our SOPR considers a
relative share of 7% to 10% of the revenue generated with our Ontoscope Components (OsC) in the variants wrongly and illegally called smartphone, smarttablet, smartwatch, smartTV, smartcar, and so on, and also
relative share of 27% to 30% of the revenue generated with our Ontologic System Components (OSC) in the variants wrongly and illegally called Android, ChromeOS, MacOS, iOS, WatchOS, Vision OS, Windows, Harmony, and so on.
But that should not be taken for granted, because no legally required contract has been signed for getting the allowance and license for the performance and reproduction of certain parts of our Ontoscope (Os), which has been taken as source of inspiration and blueprint for
all kinds of materials without proper referencing or labelling, and
production of unauthorized plagiarisms and fakes.
More importantly, our SOPR is considering adjusting its (unofficial) License Model (LM) and asking a
relative share of 17% to 20% of the revenue generated with our Ontoscope Components (OsC) in the variants wrongly and illegally called AI phone (e.g. Apple iPhone, Samsung Galaxy, Honor Magic), Artificial Intelligence (AI) Personal Computer (PC) (AIPC) (e.g. Copilot Plus PC), AI tablet, and so on.
Furthermore, our SOPR rejects the licensing practice of other entities in relation to the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S., because our relative share for the performance and reproduction of Ontologic Applications and Ontologic Services (OAOS) is Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) (compare for example with licensing in the field of Software as a Service (SaaS) better known as appstore), but is incompatible with their fixed fee (on a monthly basis).
See also the note
Gemini for all models?! Always for free?! of the 14th of August 2024,
and the other publications cited therein.
As we always say: It is always better to collaborate with us. Otherwise it becomes more expensive.
Legal hint: As long as no commercial relationship exists, our SOPR is legally allowed to set the royalties without violating the Sherman Act. Is not it? :)
16:46 and 27:59 UTC+2
% + %, % OAOS, % HW #17
We will enforce maximal protection and exclusivity of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. through the
national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
rights and properties of C.S. and our corporation, and
Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR).
In this relation, we have read a few but very interesting publications, which show once again that none of the entities concerned has any clue what to do, none of them, and therefore they can only infringe the rights and properties of C.S. and our corporation, specifically imitate us, abuse their illegal monopolies, conspire, and do other fraudulent and even serious criminal activities. Nothing of them is true and legal at all.
But what is true is the fact that everybody is following us and now
talking about the cost of the exclusive and mandatory infrastructures of our SOPR and our other Societies and
understanding that predicting the next word is definitely not sufficient and that what is required is our OS.
Howsoever, we will not wait for the moment in the next few months when a Cognitive Agent System (CAS) as a cognitive chatbot, a Multi-Agent System (MAS), or the next essential element of our Evoos and our OS is presented by the usual bad actors.
Our SOPR also came to the conclusion that the discussion in relation to private law and a related potential out-of-court agreement could be continued regardless of the further legal steps in relation to criminal law, which we are preparing to establish facts as well, specifically that
C.S. and our corporation do exist and
Evolutionary operating system (Evoos) and Ontologic System (OS) are original and unique works of art created by C.S.,
and ideally that
original and unique masterpieces were taken as sources of inspiration and blueprints, and
plagiarisms and fakes were implemented alone and in collaboration to conduct unlawful acts by
- damaging, diluting, and devaluing, and even destroying the identities, authenticities, integrities, reputations, and momenta, as well as follow-up opportunities, works, and achievements of C.S. and our corporation, and
- interfering with, and also obstructing, undermining, and harming the exclusive moral rights respectively Lanham (Trademark) right of C.S. and our corporation,
and because companies themselves ended the negotiation in this regard by the
collaboration with governments and State-Owned Enterprises (SOEs),
publication of illegal Free and Open Source Software (FOSS), and
collaboration with other bad actors.
However, the outcomes of the
whole endeavour in general and
interim injunctions and affirmative actions for the rights and properties of C.S. and our corporation in particular
are still uncertain.
Alphabet (Google)
+1% for continuation of the partial plagiarism and fake of our OS called Android. The logos of Honor AI and Google Cloud side by side say it all, because it follows Samsung AI. In this sense: Nix, nada, nothing sort of "Encircle to Erase".
64% if we count correctly
Microsoft
+1% for Microsoft Graph (originally Office 365 Unified API), which is a Microsoft API developer platform that again connects multiple services and devices, including Windows, Microsoft 365, and Azure, and which is used by Copilot for Microsoft 365, and for the continuation of the plagiarism and fake of our Ontologic Net (ON) and Ontologic Web (OW) called cloud-based supercomputing platform, Stargate, and other supercomputers.
69% if we count correctly
Amazon
+1% for continuation of the plagiarism and fake of our Ontologic roBot (OntoBot) called Alexa based on the LLM of Anthropic.
64% if we count correctly
Apple
64% if we count correctly
In addition, proper referencing respectively labelling is about to become mandatory (again).
See also the notes
Other labels for our works of art prohibited of the 8th of June 2024,
SOPR added clause for end of no labelling of the 11th of June 2024,
% + %, % OAOS, % HW #16 of the 16th of August 2024,
The big AI bluff has been busted, too of the 21st of August 2024,
Inability to act and stalemate support all claims of the 27th of August 2024,
WhatsoeverGPT bluff and hype is over of the 29th of August 2024, and
SOPR considering 17% to 20% for Os variants w.i.c. AI phone, AIPC, etc. of today.
By the way:
We have said more than often enough that we are not new to the business, we never come with empty hands to a party, and the legal scope of ... the Ontoverse (Ov), also known as OntoLand (OL), is governed by our SOPR.
If one wants a level playing field, then one must also get off a high pedestal and talk on an equal footing, but not just ignore others and dictate peace.
And if one wants something from us, then one must comply with our rules and ask us, but not just ignore us and take us for a ride.
Everybody knows that they all still put pants on one leg at a time and just cook with water and many already know that they managed to take on the emperor's new clothes and scorch the water once again.
The societies have to bear the costs of the damages, but not us.
09:00 and 10:00 UTC+2
Apple and Co. have no allowance for modifications
There will be no separation of what is branded illegally as Apple Intelligence, what is designated misleadingly as "personal intelligence" system and "visual intelligence" system, and what is wrongly and illegally called generative Artificial Intelligence, either in case of the system architecture or in case of the utilization for different purposes, because it is just our coherent Ontologic Model (OM), transformative, generative, and creative Bionics, and Ontologic roBot (OntoBot), Ontologic Search (OntoSearch) and Ontologic Find (OntoFind), Ontologic Computer-Aided technologies (OntoCAx), and other parts of our original and unique Ontologic System (OS), which will not be modified.
If the company Apple does not cancel its partnerships with implementers of plagiarisms and fakes of any part of our OS in favour of our original and unique, and therefore exclusive and mandatory Ontologic System Components (OSC) As Soon As Possible (ASAP), then our SOPR will blacklist the company Apple.
For sure, such a substitution respectively legally required correction
is not needed in case of a takeover of a plagiarist by our corporation and
will become even more obvious and clear in case Apple might choose, or has to choose, to pay its damage compensations with 64% of its company shares.
The same holds for all other entities, specifically companies, that all have to get the allowance and license for the performance and reproduction of certain parts of our Ontologic System (OS) and utilize the exclusive and mandatory infrastructures of our SOPR and our other Societies with their set of foundational and essential facilities, technologies, goods, and services.
We also recall that their customers and users are also our customers and users, who also have to comply with
national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
rights and properties of C.S. and our corporation, and
Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR),
including our right to use raw signals and data, Personally Identifiable Information (PII), knowledge bases, belief bases, models, algorithms, etc. in compliance with said legal matters, as discussed multiple times in more than the last 7 years.
By the way:
It's not a trick - It's Ontologics
"Don't call it "AI"."
15:11 and 17:11 UTC+2
Investors caught in the act
We have caught national and multinational
investment banks,
investment advisory service providers,
investment management companies (private equity firms),
venture capital firms,
angel investors,
and so on
in the act.
They are not telling the truth to keep the AI bubble alive.
We quote a transcript, which is about a talk of an investment bank and was publicized on the 11th of June 2024: "A skeptical look at AI investment
[Moderator:] Generative AI [...]
[...]
[Moderator:] [...] our Global Economics Research team estimates that generative AI could ultimately automate a quarter [25%] of all work tasks and boost US productivity by 9% and US GDP growth by 6.1% cumulatively over the next decade. [...]
But given the enormous cost to develop and run the technology with no so-called killer application for it yet found, questions about whether the technology will ever sufficiently deliver on this investment have grown. [...] [... An institute professor at the Massachusetts Institute of Technology (MIT)] estimated that only a quarter of AI-exposed tasks will be cost effective to automate over the next ten years, implying that AI will impact less than 5% of all work tasks and boost US productivity by only 0.5% and US GDP about 1% cumulatively over the next decade. [...]
[Institute Professor:] [...]
[...] the number of tasks that are going to be impacted by gen[erative] AI is not going to be so huge in the short run because a lot of the things that humans do are very multifaceted. So almost all of the things that we do in transport, manufacturing, mining, utilities has a very central component of interacting with the real world. And AI ultimately can help with that as well, but I can't imagine that being a big thing within the next few years.
[...] from the beginning, it's going to be pure mental tasks that are going to be affected. And those are not trivial, but they're not huge either. [...] on one of the most comprehensive studies [...] that codes what [...] generative AI technology [...] combined with other AI technologies and computer vision could ultimately do. [...]
[...] 20% [...] in the production process could be ultimately transformed or heavily impacted by AI. But that's a timeless prediction ultimately. [...] paper [...] looks at a subset of these technologies, the computer vision technologies, where we understand how they work and the cost is a little bit better. [...] [...] how quickly these things are going to be cost effective. Because something being ultimately done by generative AI with sufficient improvements doesn't mean it's going to be a big deal within five, six, seven years. [...] 20-25% of what is ultimately doable can be cost effectively automated by computer vision technologies within ten years.
[...] 4.5-4.6% of those 23% is going to be done in the short run and within the 10-year horizon.
[Moderator:] [...] over time, as technology evolves, you end up being able to do harder things and you end up being able to do them in a less costly way.
[Institute Professor:] [...] view [of ...] a scaling law. You double the amount of data, you double the amount of compute capacity, say, the number of GPU units or their processing power, and you're going to double the capacities of the AI model.
[...] what does it mean to double AI [capacities or] capabilities?
[...] "doubling data," what does that mean? If we throw in more data from [a social media platform] into the next version of [the plagiarism and fake of our Ontologic roBot called] GPT, that will be useful in improving prediction of the next word when you are engaged in some sort of informal conversation [...] need higher and higher quality data, and it's not clear where that data is going to come from. And wherever it comes, it's going to be easily and cheaply available to generative AI. [...]
[...] there could be very severe limits to where we can go with the current architecture. Human cognition doesn't just rely on a single mode[, which is also the reason why we call it one-trick pony]; it involves many different types of cognitive processes [(e.g. learning and reflecting)], different types of sensory inputs [respectively modalities], different types of reasoning [respectively logics]. So the current architecture of the large language models has proven to be more impressive than many people would have predicted, but I think it still takes a big leap of faith to say that, just on this architecture of predicting the next word, we're going to get something that's as smart as Hal in 2001 Odyssey. ["I'm sorry, Dave. I'm afraid I can't do that."]
[...] we know already the models. [...]
[...]
[Moderator:] But ultimately, there are people out there arguing that this technology is paving the way for superintelligence that can really accelerate innovation broadly in the economy. [...]
[Institute Professor:] [...]
[...] the premise that we are on a path towards some sort of superintelligence, precisely because of the reasons that I tried to articulate a second ago, that I think this one particular way of understanding and summarizing information [by predicting the next word] is only a small part of what human cognition does. It's going to be very difficult to imagine that a large language model is going to have the kinds of capabilities to pose the questions for itself, develop the solutions, then test those solutions, find new analogies, and so on and so forth.
[...] within a 20-, 30-year horizon, the process of science could be revolutionized by AI tools. But the way that I would see that is that humans have to be at the driving seat [(the human at the center)]. They decide where there is great social value for additional information and how AI can be used. Then AI provides some input. Then humans have to come in and start bringing other types of information and other real-world interactions for testing those [(e.g. cybernetics, feedback, Humanistic Computing (HC or HumanC), and evaluation and validation)]. And then once they are tested, some of those reliable models have to be taken into other things like drugs or new products and another round of testing and so on and so forth have to be developed [(e.g. iterative and incremental development, epistemology, and evolution)].
So if you're really talking about superintelligence then you must think that all of these different things can be done by an AI model[, which only predicts the next word,] within a 20-, 30-year horizon. Again, I find that not very likely.
[...]
[Moderator:] We have equity analysts forecasting a trillion-dollar spend. Is that money going to go to waste?
[Institute Professor:] [...] my paper and basic economic analysis suggests that there should be investment because many of the things that we are using AI for is some sort of automation [(e.g. Autonomic Computing (AC), and Robotic Automation (RA) or Robotic Process Automation (RPA), and also Autonomous System (AS) and Robotic System (RS))]. [...]
[...] investment boom, some of it is going to get wasted [...] some of it is going to be super useful because it's going to lay the seeds of that next phase [...] So I think the devil is in the detail.
[...]
[Head of Global Equity Research:] The biggest challenge is that, over the next several years alone, we're going to spend over a trillion dollars developing AI, you know, around the infrastructure, whether it's the data center infrastructure, whether it's utilities infrastructure, whether it's the applications. [...] What trillion-dollar problem is AI going to solve? This is different from every other technology transition [...].
Historically, we've always had a very cheap solution replacing a very expensive solution. Here, you have a very expensive solution that's meant to replace low-cost labor. And that doesn't even make any sense from the jump, right? And that's my biggest concern on AI at this point.
[...]
[Head of Global Equity Research:] There's nothing about AI that's cheap today, [...]. [...] [...] there's a lot of revisionist history about how things always start expensive and get cheaper. Nobody started with a trillion dollars. And there's examples of when there's a monopoly on the bottleneck of the technology, the technology costs don't always come down.
[...]
[...] It's really the GPU costs, the number that you have to use in order to run the data centers and then how much the chips cost.
[...]
[...] Even if it does come down, the starting point of how expensive this technology is means costs have to come down an unbelievable amount to get to the point where this is actually affordable to automate some of these technologies.
[Moderator:] And ultimately, you don't really have a lot of expectation that it will be able to perform in terms of cognitive ability close to humans. [...]
[Head of Global Equity Research:] Many people want to say this is the biggest technology invention of their lifetime. [...] those were transformative technologies that were fundamentally enabling you to do something different than you had ever done before. [...]
[...]
[Head of Global Equity Research:] [...] they showed the road map right away [...] from day one of the smartphone. [...] And it was out in the future that we were going to be able to do them, but we had identified [at least some of] the things that we were going to be able to do.
[...] But AI is pie-in-the-sky, big picture, "if you build it, they will come," just you got to trust this because technology always evolves. And we're a couple years into this, and there's not a single thing that this is being used for that's cost effective at this point. I think there's an unbelievable misunderstanding of what the technology can do today. The problems that it can solve aren't big problems. There is no cognitive reasoning in this. [...] people act like if we just tweak it a little bit, it's somehow going to -- we're not even in the same zip code of where this needs to be.
[Moderator:] Even if the benefits and maybe even the returns don't justify the costs, do the big tech companies that are spending this money today have any choice but to engage in the AI arms race, given the competitive pressures?
[Head of Global Equity Research:] [...] no, they don't have a choice, right? Which is why we're going to see the build-out continue for now. That's sort of what the technology industry does. [...] look at virtual reality. This would not be the first technology that didn't meet the hype, [...]. [...]
[...] There's just historical context. [...] Blockchain was supposed to be a big technology. A metaverse was supposed to be [...]. Those things are non-existent today from a technology use case standpoint. And just because the tech industry hypes something up doesn't really mean a lot. [Is not it?]
[...] if it does work and they haven't positioned themselves for it, they're going to be way behind. So there's a huge [Fear Of Missing Out (]FOMO[)] element to this which is powering all the hype. [...]
[...]
[Head of Global Equity Research:] [...] almost universally it's showing that there's not a lot that AI can do today. [...] there are people on different parts of the spectrum of anywhere from, "We shouldn't expect it to do anything today; it's such early stages. And technology evolves and finds a way" to the other side of the spectrum is, "We're several years into this and by this time it was supposed to be doing something." And everybody's on a different part of that continuum.
[...] there's very limited applications for how this can be used effectively. Very few companies are actually saving any money at all doing this. [...]
[...]
[Head of Global Equity Research:] [...] it's all on the infrastructure side. [...] we're still going to keep building AI. I don't think we're anywhere near done building it [...]. [...] [...] what I've been saying for two years is what I continue to say. Keep buying the infrastructure providers.
[Moderator:] And ultimately, [...] we are building all this infrastructure and capacity that at some point won't really be in demand.
[Head of Global Equity Research:] Right. The very nuanced view, right?
[Moderator:] What's that look like?
[Head of Global Equity Research:] Yeah, it looks bad. It looks exactly like 2001, '02, and '03 for the Internet build-out. Like people, again, it's a relevant discussion, right, when they want to talk about that "if you build it, they will come," and when we built the Internet and then 30 years [later] developed [a messy ride sharing company] and all those things are true, right? It ends badly when you build things that the world's not ready for [...]. And I don't know that it's as problematic simply because a lot of the companies spending money today are better capitalized than some of the companies that were spending money then. But when you wind up with a whole bunch of excess capacity because you built something that isn't going to get utilized, it takes a while. The world has to then grow back into that supply-demand balance.
[...] one of the biggest lessons I've learned over 25 years here is bubbles take a long time to burst. So the build of this could go on a long time before we see any kind of manifestation of the problem.
[Moderator:] What should investors be focused on to see a changing of the tea leaves here that you expect to come eventually?
[Head of Global Equity Research:] I think it'll be fascinating to see how long people can go with the "if you build it, they will come" approach [...]. At some point in the next 12 to 18 months, you would think there has to be a bunch of applications that show up that people can see and touch and feel that they feel like, "Okay, I get it now; here's how we're going to use AI."
Because again, investors are trying to use this in their everyday life, and there was a period a year ago where everybody was pretty excited about how asset managers could utilize AI. And I think if you interviewed an asset manager for this, most of them are going to tell you the same thing, which is, "We're struggling on how to figure out how to use it. We can't really find applications that make a ton of sense."
Again, there's isolated examples, models, and things in that nature but nothing significant. And so I think the longer it goes without any applications that are obvious to people or significant applications that are obvious to people, the more challenge. [...]
[Moderator:] [...] our US Internet equity research analyst, says that current [Capital Expenditure (]CapEx[)] spend as a share of revenues doesn't look markedly different from prior tech investment cycles. And [he] adds that the potential for returns from this CapEx cycle seems more promising than even previous cycles [...]. [Why?] [...]
[...] if AI's killer application fails to emerge in the next 6 to 18 months, he'll become more concerned about the ultimate payoff of all the investment we're currently seeing.
[...]"
We quote a report, which is about illegal investments in the exclusive and mandatory infrastructures of our SOPR and our other Societies and was publicized on the 20th of June 2024: "AI's $600B Question
The AI bubble is reaching a tipping point. Navigating what comes next will be essential.
In September 2023, I published AI's $200B Question[, which has become a $294B answer in December 2023]. The goal of the piece was to ask the question: "Where is all the revenue?"
At that time, I noticed a big gap between the revenue expectations implied by the AI infrastructure build-out, and actual revenue growth in the AI ecosystem, which is also a proxy for end-user value. I described this as a "$125B hole that needs to be filled for each year of [Capital Expenditure (]CapEx[)] at today's levels [(see also point 4 below)]."
[...] Has AI's $200B question been solved, or exacerbated?
If you run this analysis again today, here are the results you get: AI's $200B question is now AI's $600B question.
Note: It's easy to calculate this metric directly. All you have to do is to take Nvidia's run-rate revenue forecast and multiply it by 2x to reflect the total cost of AI data centers (GPUs are half of the total cost of ownership - the other half includes energy, buildings, backup generators, etc)^1. Then you multiply by 2x again, to reflect a 50% gross margin for the end-user of the GPU, (e.g., the startup or business buying AI compute from [Microsoft] Azure or [Amazon Web Services (]AWS[)] or [Google Cloud Platform (]GCP[)], who needs to make money as well).
What has changed since September 2023?
1. The supply shortage has subsided: Late 2023 was the peak of the GPU supply shortage. Startups were calling [...] anyone that would talk to them, asking for help getting access to GPUs. Today, that concern has been almost entirely eliminated. For most people I speak with, it's relatively easy to get GPUs now with reasonable lead times.
2. GPU stockpiles are growing: Nvidia reported in Q4 that about half of its data center revenue came from the large cloud providers. Microsoft alone likely represented approximately 22% of Nvidia's Q4 revenue. Hyperscale CapEx is reaching historic levels. These investments were a major theme of Big Tech Q1 '24 earnings, with CEOs effectively telling the market: "We're going to invest in GPUs whether you like it or not." Stockpiling hardware is not a new phenomenon, and the catalyst for a reset will be once the stockpiles are large enough that demand decreases.
3. OpenAI still has the lion's share of AI revenue: [A media company] recently reported that OpenAI's revenue is now $3.4B, up from $1.6B in late 2023. [But in "2023, a study estimated that operating ChatGPT would cost the company 700,000 dollars per day. And the more people use the chatbot, the higher these costs will rise. OpenAI announced just last week [in September 2024] that the weekly number of ChatGPT users has doubled since November [2023] to over 200 million users. However, the high level of investment in AI has so far been offset by [relatively] poor revenues. This year, OpenAI could lose five billion dollars - ten times as much as in 2022."] While we've seen a handful of startups scale revenues into the <$100M range, the gap between OpenAI and everyone else continues to loom large. [Though we are not sure if these revenue numbers are correct, because OpenAI has only 1 million ChatGPT users paying $20 per month.] Outside of ChatGPT, how many AI products are consumers really using today? Consider how much value you get from Netflix for $15.49/month or Spotify for $11.99. Long term, AI companies will need to deliver significant value for consumers to continue opening their wallets.
4. The $125B hole is now a $500B [$600B] hole: In the last analysis [in September 2023], I generously assumed that each of Google, Microsoft, Apple and Meta will be able to generate $10B annually from new AI-related revenue. I also assumed $5B in new AI revenue for each of Oracle, ByteDance, Alibaba, Tencent, X [(Twitter)], and Tesla. Even if this remains true and we add a few more companies to the list, the $125B hole is now going to become a $500B [$600B] hole.
5. It's not over - the B100 is coming: Earlier this year, Nvidia announced their B100 chip, which will have 2.5x better performance for only 25% more cost. I expect this will lead to a final surge in demand for [Nvidia] chips. The B100 represents a dramatic cost vs. performance improvement over the H100, and there will likely be yet another supply shortage as everyone tries to get their hands on B100s later this year.
One of the major rebuttals to my last piece was that "GPU CapEx is like building railroads" and eventually the trains will come, as will the destinations - the new agriculture exports, amusement parks, malls, etc. I actually agree with this, but I think it misses a few points:
1. Lack of pricing power: In the case of physical infrastructure build outs, there is some intrinsic value associated with the infrastructure you are building. If you own the tracks between San Francisco and Los Angeles, you likely have some kind of monopolistic pricing power, because there can only be so many tracks laid between place A and place B. In the case of GPU data centers, there is much less pricing power. GPU computing is increasingly turning into a commodity, metered per hour. Unlike the CPU cloud, which became an oligopoly, new entrants building dedicated AI clouds continue to flood the market. Without a monopoly or oligopoly, high fixed cost + low marginal cost businesses almost always see prices competed down to marginal cost (e.g., airlines).
2. Investment incineration: Even in the case of railroads - and in the case of many new technologies - speculative investment frenzies often lead to high rates of capital incineration. The Engines that Move Markets is one of the best textbooks on technology investing, and the major takeaway - indeed, focused on railroads - is that a lot of people lose a lot of money during speculative technology waves. It's hard to pick winners, but much easier to pick losers (canals, in the case of railroads).
3. Depreciation: We know from the history of technology that semiconductors tend to get better and better. Nvidia is going to keep producing better next-generation chips like the B100. This will lead to more rapid depreciation of the last-gen chips. Because the market under-appreciates the B100 and the rate at which next-gen chips will improve, it overestimates the extent to which H100s purchased today will hold their value in 3-4 years. Again, this parallel doesn't exist for physical infrastructure, which does not follow any "Moore's Law" type curve [(originally 18 months and actually 24 months)], such that cost vs. performance continuously improves.
4. Winners vs. losers: I think we need to look carefully at winners and losers - there are always winners during periods of excess infrastructure building. AI is likely to be the next transformative technology wave, and as I mentioned in the last piece, declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI [...] during this period of experimentation.
A huge amount of economic value is going to be created by AI. Company builders focused on delivering value to end users will be rewarded handsomely. We are living through what has the potential to be a generation-defining technology wave. Companies like Nvidia deserve enormous credit for the role they've played in enabling this transition, and are likely to play a critical role in the ecosystem for a long time to come.
Speculative frenzies are part of technology, and so they are not something to be afraid of. Those who remain level-headed through this moment have the chance to build extremely important companies. But we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we're all going to get rich quick, because AGI is coming tomorrow, and we all need to stockpile the only valuable resource, which is GPUs.
In reality, the road ahead is going to be a long one. It will have ups and downs. But almost certainly it will be worthwhile.
^1 Some commenters challenged my 50% assumption on non-GPU data center costs, which I summarized as energy costs. Nvidia actually came to the exact same metric, which you can see on Page 14 of their October 2023 analyst day presentation, published a few days after my last piece."
Comment
We are sure our fans and readers are able to see that this report follows the exchange written out in the transcript quoted above.
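For readers who want to retrace the arithmetic of the quoted analysis, a minimal sketch follows; the revenue assumptions, the 600 bn U.S. Dollar revenue requirement, and the B100 figures (2.5x the performance for 25% more cost) are taken from the quote, while the "few more companies" are deliberately left out:

    # Retracing the arithmetic of the quoted analysis. All figures are from the
    # quote; "a few more companies" are deliberately left out.
    big_four = 4 * 10e9  # $10B annual AI revenue each for Google, Microsoft, Apple, and Meta
    six_more = 6 * 5e9   # $5B each for Oracle, ByteDance, Alibaba, Tencent, X, and Tesla
    assumed_ai_revenue = big_four + six_more  # $70B in total

    required_revenue = 600e9  # the quoted revenue requirement
    hole = required_revenue - assumed_ai_revenue
    print(f"revenue hole: ${hole / 1e9:.0f}B")  # $530B, i.e. roughly the quoted $500B

    # B100 vs. H100: 2.5x the performance for only 25% more cost
    relative_cost_per_performance = 1.25 / 2.5
    print(f"B100 cost per unit of performance: {relative_cost_per_performance:.0%} of the H100")  # 50%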
We quote a report, which is about the stock market and was publicized today: "[...]
Price gains for high-tech stocks drive US stock markets
[...] Yesterday, the US stock markets reacted surprisingly robustly to stubbornly high inflation. Prices initially dipped in early trading after so-called core inflation remained at a high level in August. However, prices then gradually recovered and the major share indices all ended trading higher - mainly thanks to the profits of large companies in the chip sector.
Statements by Nvidia boss Jensen Huang contributed to the positive mood in the US technology sector. At a conference held by US investment bank Goldman Sachs in San Francisco, he spoke of high demand for the flagship company's scarce chips in the field of artificial intelligence (AI). "In New York, the AI fantasy has returned to the trading floor with full force," commented [a] market analyst [...].
[...]"
Comment
So, so, interesting. We are sure that our fans and readers are able to connect the dots.
We quote a report, which is about an event and was publicized : "Live From San Francisco: Mark Zuckerberg Tapes a Podcast With 6,000 Friends
[...]
[...] the chief executive of Meta, tape a podcast about artificial intelligence, the metaverse and how he outmaneuvered the rest of Silicon Valley to keep his company winning.
[...]
Other scions of capitalism piped up too. Jamie Dimon, the chief executive of JPMorgan Chase, appeared on the jumbotron. Daniel Ek, the founder of Spotify, flew in from Sweden for the chat. Even Jensen Huang, the very busy chief executive of Nvidia, made a cameo appearance. [...]
[...]"
Comment
So, so, interesting. We are sure that our fans and readers are able to connect the dots.
Conclusion
There is absolutely no doubt that they are talking about our Ontologic System (OS) and the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies.
Endeavours in fields like the blockchain technique, Virtual Reality (VR), what is wrongly called Metaverse, what is wrongly called Large Language Model (LLM), what is wrongly called chatbot, etc. did not work, because there were no other goals, strategies, plans, tricks, activities, and so on behind them than to steal our original and unique work of art titled Ontologic System and created by C.S.. That is their roadmap and nothing else.
Indeed, the realizations of our Ontoscope (Os), also wrongly called smartphone and AI phone, and Ontologic Net (ON), also wrongly called Cloud, were successful, but only from the physical and technological point of view, and not from the artistical, legal, and economical points of view, because they have reckoned without their host, and therefore damage compensations have to be paid, all illegal materials have to be transferred, and a lot of other legally required actions have to be taken anyway.
The devil is not only in the detail.
They have no clue what to do.
They do not want to understand that our Evoos and our OS cannot be stolen.
They do not have a roadmap other than trying to steal our properties, our roadmap, and everything else. That is the reason why they occupy not a zip code but our Ontoverse, aka. OntoLand.
They have no other business goal, strategy, and plan than to infringe the rights and properties of C.S. and our corporation, including mimicking C.S. and our corporation.
As we already said in the Clarification of the 2nd of August 2024, "Like all the crypto crap start-ups, all these AI crap start-ups have no viable business model and are not sustainable due to the cost for computing power and so on. But the same also holds for already established companies. Their infringements of the rights and properties of C.S. and our corporation become even more obvious in this context, because all sources of income are part of our business models, which shows once again that they interfere with, and also obstruct, undermine, and harm the exclusive moral rights respectively Lanham (Trademark) rights (e.g. exploitation (e.g. commercialization (e.g. monetization)))."
Unbelievable: they warn about inconclusive and even crazy investments, and clearly say that "[t]here's just historical context", and simultaneously argue that "current [Capital Expenditure (]CapEx[)] spend as a share of revenues doesn't look markedly different from prior tech investment cycles [...] the potential for returns from this CapEx cycle seems more promising than even previous cycles", and recommend: if this or that ... maybe, then ... keep on investing.
Get it into your heads: Our foundation of the so-called stochastic parrot and one-trick pony respectively WhatsoeverGPT based on the brute force approach is just funny and entertaining vaporware and also the reason why we went on in 2000 and created our OS.
"According to [a financial newspaper], [the] hedge fund Elliott recently informed its clients in a letter that artificial intelligence is massively overvalued ("overhyped"). Nvidia and the entire big tech sector were living in a "bubble land"."
Indeed, what we are once again observing this time is the Fear Of Missing Out (FOMO) like kindergarten children and rockstar groupies, confirming once again the Greater Fool Hypothesis, or better said, the Greater Fool Law. In this sense: Let us buy tulip bulbs, crypto coins, Non-Fungible Tokens (NFTs), and ... AI GPUs.
Indeed, this time is different, because they are willing to make the biggest mess and cause the greatest chaos in the financial sector of all time.
But that will definitely happen not at our expense and not by damaging, diluting, and devaluing, and even destroying the identities, authenticities, integrities, reputations, and momenta, as well as follow-up opportunities, works, and achievements of C.S. and our corporation.
We have been warning about its making for several years, and now one can virtually see how the rolling ball is becoming a wrecking ball.
The investments, estimated at 200 bn U.S. Dollar and realized at 294 bn U.S. Dollar in 2023, and estimated at least at 600 bn U.S. Dollar by others and at 900 bn U.S. Dollar by us for 2024, are burned and lost for at least the 2 following reasons:
We arranged it so as to have some more time to correct the legal matter, specifically by establishing Joint Ventures (JVs) respectively takeovers of companies, without damaging the candidates. And instead of securing the assets, they invest in the candidates.
If anything is realized at all in the next 6, 12, or 18 months, then the hardware will be old (see Moore's law and the sketch below). And we can already tell today that it is old.
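To illustrate the depreciation argument with a worked example: under a Moore's-law-style doubling of cost vs. performance every 24 months (the figure bracketed in the quote above), hardware bought today loses its relative value quickly. A minimal sketch, with purely illustrative time horizons:

    # Decay of relative cost vs. performance value under a 24-month doubling period.
    DOUBLING_PERIOD_MONTHS = 24

    def relative_value(months_elapsed: float) -> float:
        """Fraction of the original cost-per-performance value after months_elapsed."""
        return 0.5 ** (months_elapsed / DOUBLING_PERIOD_MONTHS)

    for months in (6, 12, 18, 36):
        print(f"after {months} months: {relative_value(months):.0%}")
    # after 6 months: 84%, after 12 months: 71%, after 18 months: 59%, after 36 months: 35%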
The infrastructure, including what is wrongly called the "dedicated AI clouds", belongs to C.S. and our corporation anyway, because it is our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) and they do not belong to the core businesses of other entities with some few and small, but more or less irrelevant exceptions proving the rule.
And we finance the building and operation of the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies with their set of foundational and essential facilities, technologies, goods, and services as some sort of philanthropy, because no other entity has the exclusive exploitation rights (e.g. commercialization rights (e.g. monetization rights)) respectively the monopolistic pricing power and can afford this artistically, legally, technologically, and economically.
And we started with a trillion U.S. Dollar.
But those anti-social, bloody stupid, and arrogant entities do not understand such an attitude and action and hence they are trying to make sense out of it and keep investors at it no matter what.
And as a matter of principle, we do not feed all those trolls.
See also the notes
Nvidia delivers Chinese A800 GPU of the 10th of August 2023,
Some thoughts and notes of the 22nd of July 2024,
Investors want to steal our royalties and profits of the 4th of August 2024,
Investment banks, investors, asset managers, and Co. 'R' Us of the 7th of August 2024,
The big AI bluff has been busted, too of the 21st of August 2024,
Inability to act and stalemate support all claims of the 27th of August 2024,
WhatsoeverGPT bluff and hype is over of the 29th of August 2024, and
% + %, % OAOS, % HW #17 of the 8th of September 2024.
21:35 UTC+2
Some things are just too odd
"[...] a lawsuit filed [in December 2023] by The New York [Trolls] alleging that the [company OpenAI] violated copyright law by using [Trolls] journalism to train its systems [...]."
"In a demonstration for The New York [Trolls] [...], an OpenAI technical fellow, showed the chatbot solving [...] a kind of word puzzle [...]."
Somehow, there is once again something, which we do not buy.
Maybe it is the point that we would not report about something in a positive and supportive way, if said something violates our rights.
Or maybe it is just this fact, which we have already mentioned in the Clarification of the 2nd of August 2024: "If a [Large Language Model (]LLM[)] has hallucinations, then it has no reasoning and understanding, or is crazy or mad." The Chain-of-Thought (CoT) prompt engineering technique does not solve anything in relation to the basic problems, which are inherent in the brute force approach.
Furthermore, the specific problems, which this approach is able to solve, have nothing in common with reasoning. For example, the International Mathematical Olympiad sets tasks that are standard problems with routine solutions, so that eventually it is only about brute force respectively computing power to predict the next word. That is also what is designated by the nonsense term artificial reasoning.
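To make this point concrete: the Chain-of-Thought (CoT) technique merely prepends a fixed instruction to the prompt of the very same next-word predictor. A minimal sketch, in which generate() is a hypothetical placeholder for any Large Language Model backend and not a real API:

    # Minimal sketch of the Chain-of-Thought (CoT) prompt engineering technique.
    # generate() is a hypothetical placeholder for any next-word predictor (LLM),
    # not a real API. CoT only prepends an instruction to the prompt; the
    # underlying brute force next-word prediction remains unchanged.

    def generate(prompt: str) -> str:
        """Hypothetical next-word predictor; stands in for any LLM backend."""
        raise NotImplementedError("placeholder, no real backend")

    def answer_direct(question: str) -> str:
        return generate(question)

    def answer_with_cot(question: str) -> str:
        # The whole "technique" is a fixed instruction prepended to the prompt.
        return generate("Let's think step by step.\n" + question)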
Report: "[OpenAI research scientist] alluded to the fact that OpenAI leveraged a new optimization algorithm [(our original and unique integration of the field of Evolutionary Computing (EC) and Computational Creativity, our Chat Genetic Programming technologies (GPx), also wrongly called chains of thought generation, and the basic properties of our OS of (mostly) being reflective, well-structured and -formed, and validated and verified respectively formal modeling and formal verification] and training dataset containing "reasoning data" and scientific literature specifically tailored for reasoning tasks."
Online encyclopedia: "o1 is designed to solve more complex problems by analyzing its answers, exploring [trying] different strategies, and refining its reasoning process."
Or maybe it is just the fact that it is not a chatbot, but part of our Cybernetics, Cognitive Agent System (CAS or CogAS) architecture, and transformative, generative, and creative Bionics, also wrongly and illegally called generative Artificial Intelligence (genAI), and also our Ontologic roBot (OntoBot), which were created, presented, and discussed with our original and unique, unforeseeable and unexpected, personal and copyrighted works of art titled Evolutionary operating system and Ontologic System, and is a unified Artificial Neural Network (ANN) Model (ANNM), specifically a connectionist symbol processing system and a functional symbolic architecture (e.g. connectionist Expert System (ES)), with a pinch of a Problem Solving Environment (PSE) and the optional integration with an Information Retrieval (IR) System (IRS) (e.g. Information Filtering (IF) System (IFS) (e.g. Recommendation System or Recommender System (RecS)), Search Engine (SE), and Question Answering (QA) System (QAS)), and also a Dialogue Manager (DM or DiaM) in particular and a Conversational Agent System (CAS or ConAS) in general, which again is also wrongly and illegally called a chatbot and generative conversational Artificial Intelligence.
See also the webpages of the
Ontologic data storage Base (OntoBase) component, specifically the section
and
webpage Links to Software, specifically the sections
of the website of OntoLinux.
See also the
Clarification
Clarification
Clarification
Clarification
So much about the truly original generative AI pioneer and the other nonsense term "agentive Artificial Intelligence".
22:01 UTC+2
SOPR has no problems with processor, energy, and climate
In contrast to all other entities, our Society for Ontological Performance and Reproduction (SOPR) and our other Societies have no problems with processor manufacturing, processor procurement, energy consumption, and climate protection.
For example, our OntoLab, The Lab of Vision, has developed the successor technology of photolithography machines and some other disruptive breakthrough technologies already around 2014 to 2017.
We are also able to manufacture our processors, which are more advanced than for example Nvidia chips, on the basis of
freely available and publicly licensable microprocessor and processing unit (PU) core designs, including for example the Intellectual Properties (IPs) of the company ARM,
available photolithography machines and semiconductor fabrication plants (fab, foundry)
worldwide.
In fact, it is the same situation as in the fields of operating system, Internet, World Wide Web (WWW), Global Brain (GB), Bionics (e.g. AI, ML, CI, ANN, CV, etc.), and so on. We have created the other revolutions as well.
As usual, no joke, no marketing, no blah blah blah, no voodoo, no mentalism, no politics, and as we always say: We never come with empty hands to a party. It is always better to collaborate with us. And it is even better to be part of us.
And we are very sure that a lot of countries and companies, if not all of them, will be happy to collaborate with us, because we have made the decisions, are making the decisions, and will make the decisions. :)
Welcome to the 21st Century.
Welcome to the New Reality (NR).
Welcome to the Ontoverse (Ov).
Welcome to the OntoLand (OL).
Welcome to the New World Order (NWO).
17:15 UTC+2
Comment of the Day
Universal Language Model (ULM)
Unified Language Model (ULM)
World Language Model (WLM)
Global Language Model (GLM)
We are preparing a related clarification, which once again is about the copyright law and the fields of
Large Language Model (LLM),
Universal Language Model (ULM) (see also the Investigations::Multimedia, AI and Knowledge management of the 21st of April 2016),
Unified Language Processing (ULP) (see also the Clarification Announcement of the 1st of May 2016), and
Global Brain Large Language Model or simply Global Language Model (GLM).
21:00 and 22:20 UTC+2
Microsoft should clean up OpenAI mess
The companies Microsoft and OpenAI declared in the beginning of December 2023 that Microsoft has no shares of OpenAI, but Microsoft is entitled to up to 49% of the profits respectively a share of 49% of the profits of the company OpenAI Global, LLC (capped profit company), which is the for-profit arm of OpenAI.
But reports are also calling Microsoft an investor without mentioning said difference.
Guess why it is called shareholder.
Maybe Microsoft does not have to pay for losses and damage compensations, which would exceed its investments.
We also got the information that OpenAI changed a graphic on its website showing
"OpenAI, Inc [...] (OpenAI Nonprofit)" as owner "Owns" of "Holding company for OpenAI Nonprofit + employees + investors",
"Employees & other investors" as owner "Owns" of "Holding company for OpenAI Nonprofit + employees + investors",
"Holding company for OpenAI Nonprofit + employees + investors" as "Majority owner" of "OpenAI Global, LLC (capped profit company)", and
"Microsoft" as "Minority owner" of "OpenAI Global, LLC (capped profit company)" at first and in the beginning of December 2023 "Microsoft" as "Minority economic interest" in "OpenAI Global, LLC (capped profit company)".
But we still have the "Holding company for OpenAI Nonprofit + employees + investors" as "Majority owner" of "OpenAI Global, LLC (capped profit company)", which raises the question who is the minority owner of "OpenAI Global, LLC (capped profit company)".
We also wonder why Microsoft does not belong to the group of "Employees & other investors" and hence also "Owns" the "Holding company for OpenAI Nonprofit + employees + investors", although it invested the largest amount of the early investments, which would be a share of around 49% of the "Holding company for OpenAI Nonprofit + employees + investors".
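For clarity, the quoted ownership graphic, as described above, can be written out as a small data structure; the labels follow the graphic and the open questions remain open:

    # The quoted ownership graphic, as described above, written out as a small
    # data structure. Labels follow the graphic; nothing is added.

    ownership = {
        "Holding company for OpenAI Nonprofit + employees + investors": {
            "Owned by": ["OpenAI, Inc [...] (OpenAI Nonprofit)",
                         "Employees & other investors"],
        },
        "OpenAI Global, LLC (capped profit company)": {
            "Majority owner": "Holding company for OpenAI Nonprofit + employees + investors",
            # at first labelled "Minority owner", relabelled in the beginning of
            # December 2023:
            "Minority economic interest": "Microsoft",
        },
    }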
We are even more curious about the answer to the following question: If Microsoft has no shares of OpenAI, then why is it still
using its illegal Global Language Model (GLM) like an ordinary customer instead of building and using its own (illegal) GLM, and
supporting a company, which
- depends on the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies, specifically our Ontologic Net (ON), including our Interconnected supercomputer (Intersup) respectively supercomputing infrastructure and what is wrongly and illegally called Cloud-native infrastructure, and
- takes a share of the revenue and profit of Microsoft, or, being more precise, of C.S. and our corporation?
We demand that the company Microsoft and the organization OpenAI clean up the mess, including that nonsense ownership structure of OpenAI, because we will begin with the clean-up in this year.
OpenAI has to decide what it wants to be, either a
not-for-profit entity, research company, debate club, or whatsoever, that does its very own things, which definitely will not have any substantial similarity with our things anymore, or
subsidiary respectively business division.
Microsoft has to decide what it wants to be, either a
parent company respectively majority owner of OpenAI (unlimited profit company), or
customer.
We have also seen such support of and collaboration with other companies by Microsoft before, for example over many years in the case of many other start-ups and the company Nvidia, and recently in the case of the company Oracle.
That is all against common commercial practice, and we have not seen such nonsense before.
Although we have relatively good knowledge about the organization, holding, and company OpenAI and also have looked at the webpage of an online encyclopedia about the subject ChatGPT, we had overlooked something very interesting: "Platform[:] Cloud computing platforms".
In relation to the
exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies,
platforms, including our Marketplace for Everything (MoE), specifically SoftWare Agent-Based System (SWABS) or simply Agent-Based System (ABS), including SoftWare Robotic System (SWRS, SoftWare robot, SoftWare bot, or Softbot) and chatbot, and models, and
Ontologic Applications and Ontologic Services (OAOS)
see also the Comment of the Day of the 25th of February 2024.
Neither Alphabet (Google), Microsoft, or OpenAI, nor any other company will get the rights and properties of C.S. and our corporation. With the plagiarisms and fakes of our coherent Ontologic Model (OM), including Foundational Model (FM), Capability and Operational Model (COM), MultiModal Artificial Neural Network Model (MMANNM), and Global Language Model (GLM), and our transformative, generative, and creative Bionics as generative Artificial Intelligence and chatbot, and also our operating system Virtual Machine (osVM), microService technologies (mSx), Kernel-Less Distributed System (KLDS), Cloud-native technologies (Cnx), and so much more, the matter was already sufficiently explained and clarified. But with the latest plagiarisms and fakes of our GLMs as artificial reasoning and our Ontologic roBot (OntoBot) as Conversational Artificial Intelligence and agentive Artificial Intelligence, the matter is not discussable anymore.
In relation to business partnerships, specifically with investors, see also the notes
Without complete damages no essential facilities of the 13th of June 2024,
SOPR will not give up on controls and damages of the 19th of June 2024,
% + %, % OAOS, % HW #16 of the 16th of August 2024,
The big AI bluff has been busted, too of the 21st of August 2024,
Inability to act and stalemate support all claims of the 27th of August 2024,
WhatsoeverGPT bluff and hype is over of the 29th of August 2024,
Support of illegal FOSS and start-ups is dealbreaker of the 3rd of September 2024,
% + %, % OAOS, % HW #17 of the 8th of September 2024,
and the other publications cited therein.
18:40 UTC+2
Just a recall of some things that will not work
Because governments, religious communities, companies, and other entities give us the impression that they refuse to comply with the
national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
rights and properties of C.S. and our corporation, and
Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR),
we would like to recall what will not work anymore:
continuation of infringements of the rights and properties of C.S. and our corporation,
capricious regulations of the rights and properties of C.S. and our corporation,
unauthorized future developments based on the rights and properties of C.S. and our corporation.
The exclusive and mandatory infrastructures of our SOPR and our other Societies with their set of foundational and essential
facilities,
technologies,
goods, and
services
include our
Ontologic Net (ON),
Ontologic Web (OW), and
Ontologic uniVerse (OV),
and therefore all Data Centers (DCs), supercomputers, etc. with or without authorization and operating license of our SOPR, and also what is wrongly and illegally called Artificial Intelligence fabric, Artificial Intelligence supercomputer infrastructure, Artificial Intelligence hub, etc., etc., etc., and if an entity does not want to comply with the legal matters as listed once again before, then we will catch said entity at the
borders,
markets,
system interfaces,
and so on,
and then it will become very expensive (300%, damage compensations, which is the highest of royalties unpaid, profits generated, and business values increased, blacklisting, etc.), so that any illegal gain becomes a legal loss.
And forget those economic zones, because it is a globalized world and nobody is a self-sufficient island.
Sooner or later, every whale must come to the surface, where we are waiting and turning the colour of the sea.
Furthermore, if a regulation, charter, framework, and so on in relation to
cultural,
artistical,
spiritual,
devotional,
political,
industrial,
etc.,
values and principles, specifically in
non-secularistic belief systems,
capricious value systems,
despotic legal systems,
etc.,
violates the personal rights of C.S., then it is prohibited and one has to stop the performance and reproduction of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. or it will become even more expensive.
And before any of those regulations, charters, frameworks, and so on becomes relevant, the Universal Declaration of Human Rights (UDHR) comes first and has to be followed, even if a metaphysical creator and a physical creator are present. Oh, what a pity.
19:10 UTC+2
All legal matters more or less correct as discussed
We concluded once again, but with more detail, precision, and clout, penetration power, or vigour, that the
legal scope of ... the Ontoverse (Ov),
Terms of Service (ToS),
License Model (LM),
legal scope of essential facilities,
height of damage compensations,
and so on
are correct as discussed, whereby the exception proves the rule.
And as also discussed correctly, the rest is merely the processing of the formalities now in progress.
09:25, 11:12, and 17:33 UTC+2
Naked facilities constitute no pricing power
We already mentioned several times that investors are selling hot air and engage in other serious criminal activities in relation to the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies with their set of foundational and essential
facilities (e.g. buildings, traffic lights, data centers, exchange points or hubs, transmission lines, and mobile network radio towers),
technologies (e.g. models, environments, systems (e.g. backbones, core networks, or fabrics, and satellite constellations), platforms, frameworks, components, and functions, and also Service-Oriented technologies (SOx) (e.g. as a Service technologies (aaSx) (e.g. business, capability, and operational models, systems, and platforms))),
goods (e.g. contents, data, software (e.g. applications), hardware (e.g. processors), devices, robots, and vehicles), and
services (e.g. as a Service (aaS) business, capability, and operational models).
And the situation definitely does not improve, if an investor
is already blacklisted, or
is a state-backed investment vehicle or sovereign wealth fund of an autocracy, or
both.
And we also caution that pricing power exists only through the control of the whole infrastructures, but not a single part, specifically in the legal scope of ... the Ontoverse (Ov).
Keep in mind
operating system (os), Virtual Machine (VM) (e.g. osVM, microVM), Associative Memory (AM) (e.g. BlackBoard System (BBS) (e.g. Tuple Space System (TSS), Linda-like System (LlS), central of Multi-Agent System (MAS))), ABS as VM, etc.,
microService (mS), kernel-less Distributed os, serverless, aaS (e.g. FaaS), etc.,
KG, LM, AI, ML, CI, ANN, EC (e.g. GP), etc.,
CI and Soft Computing (SC) with GP instead of GA (e.g. LLM with thinking or reasoning),
MultiModal, Large Language Model (LLM), etc.,
MultiLingual, Multiparadigmatic, etc.,
Global Brain Large Language Model or simply Global Language Model (GLM) (e.g. LLM trained on Internet, or WWW, or etc., or for Internet, or WWW, or etc., or both),
reflection, ANN,
formal modeling, ANN,
formal verification, ANN,
proof-carrying, ANN,
Conversational System, ANN,
5G, ANN (e.g. 5G NR, 5G NG, 6G, etc.),
office suite, ANN, LM, etc.,
ERP, ANN, LM, etc.,
and many more subsets of the power set, including all possible variants of combinations, integrations, unifications, and fusions of the basic functions with APIs of referenced technologies, goods, and services (if a set has n elements, then its power set has 2^n subsets; for n=100, 2^100 = 1,267,650,600,228,229,401,496,703,205,376; see the sketch after this list), which includes all referenced fields and building blocks, and any length of functions, procedures, process chains, etc.
created and integrated (unified, hybridized) alone and all in one means architecture, means compilation, means selection, collection, combination, design, composition, arrangement, and assembling, means creative act and originality, means copyrighted.
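A minimal sketch of the combinatorial claim made in the list above; the three example elements are arbitrary:

    # A set with n elements has 2**n subsets, i.e. its power set has that many members.
    n = 100
    print(f"{2 ** n:,}")  # 1,267,650,600,228,229,401,496,703,205,376

    # For small n, the subsets can be enumerated explicitly:
    from itertools import chain, combinations

    def power_set(items):
        """All subsets of items, from the empty set to the full set."""
        return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

    assert len(list(power_set(["KG", "LM", "ANN"]))) == 2 ** 3  # 8 subsets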
See also the note SOPR studied classic idea-expression lawsuits of the 19th of December 2023, specifically the case Feist Publications, Inc., v. Rural Telephone Service Co., 499 U.S. 340 (1991).
In addition we have all the other related creations such as
ontological argument or ontological proof,
Zero Ontology, Null Ontology, or Ontologic Zero (Ontological Zero),
Ontologic holon (Ontological holon or Ontoholon) and Onton,
self-reflection, self-image, or self-portrait,
bionic, cybernetic, and ontologic reflection, augmentation, and extension,
Belief System (BS),
Caliber/Calibre, and also
Ontologic Programming (OP) paradigm,
Ontologic Computing (OC) paradigm,
Ontoverse (Ov) and New Reality (NR),
- Ontologic Net (ON) with its
- Interconnected network (Internet) and SuperComputer, Interconnected supercomputer (Intersup) or Internet of the second generation (Internet 2.0), etc.,
- Ontologic Web (OW) with its
- World Wide Web of the third, fourth, fifth, etc. generation (Web 3.0, 4.0, 5.0, etc.),
- Global Brain of the second generation (GB 2.0) respectively GB 3.0, if the Semantic (World Wide) Web (SWWW) is viewed as GB 2.0,
- etc.
- Ontologic uniVerse (OV) with its
- Cyber-Physical System of the second generation (CPS 2.0), including
- Industrial Internet of Things (IIoT) and Industry 4.0,
- etc.,
- etc.,
and so on.
Furthermore, we have a huge amount of evidence in the form of illegal syntheses of our synthesis or subsets of our power set (see again above) created, presented, and discussed with our OS, which we will take for our next synthesis of said illegal syntheses to carve in stone and immortalize in record the rights and properties of C.S. and our corporation.
It is that easy and does not require millions of pages to prove.
See also the note Investors caught in the act of the 12th of September 2024.
11:32 UTC+2
5G is already our 5G NG, 5G NR, 6G - ToS apply
The memory of certain decision makers in politics and industries is (too) often remarkably short. Due to this reason, we recall from time to time that the 5th Generation mobile networks or 5th Generation wireless systems (5G) and future networks are already (based on) our
5th Generation mobile networks or 5th Generation wireless systems (5G) New Radio (5G NR),
5th Generation mobile networks or 5th Generation wireless systems (5G) of the Next Generation (5G NG), and
6th Generation mobile networks or 6th Generation wireless systems (6G)
and therefore basic parts of them belong to the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies with their set of foundational and essential facilities, technologies, goods, and services.
As always, the Terms of Service (ToS) of our SOPR apply.
See also the notes
Open Grid Alliance even serious criminal of the 9th of May 2024,
JVs with SOEs required for 5G NR, 5G NG, and 6G of the 25th of June 2024, and
5G NR, 5G NG, 6G regulated by ToS of the 10th of July 2024.
This also applies for what is wrongly and illegally called
cloud computing mobile computing networking, also called by others
carrier grade Communications Service Provider (ComSP) providing or even utilizing for example Grid Computing (GC or GridC), or Cloud, Edge, and Fog Computing (CEFC),
carrier cloud for providing carrier-grade Telecommunications Service Provider cloud (TSP cloud or telco cloud),
carrier grade SDN-NFV platform,
carrier grade Telecommunications Service Provider cloud (TSP cloud or telco cloud), and
IMS (IP Multimedia Subsystem), Radio Access Network (RAN), and other network technologies, specifically if based on what is wrongly and illegally called Cloud-native.
No cherry picking for Communication Service Providers (CSPs) and other Ontologic Applications and Ontologic Services Providers (OAOSPs).
All or nothing at all.
By the way:
The share of a potential JV established as an alternative to the payment of damage compensations with for example T-Mobile US is definitely 51% or more.
21:15 UTC+2
U.S.A. cannot give what they do not own to others
United States of America (U.S.A.)
We have made crystal clear our position in relation to the
rights and properties, specifically the original and unique, visionary and unbelievable, unforeseeable and unexpected, personal and copyrighted, and prohibited for fair dealing and fair use works of art titled Evolutionary operating system and Ontologic System, and also
investment program series, specifically the
- Ontolab Vision Fund and
- Blitz Fund,
of C.S. and our corporation.
We also have made crystal clear our position in relation to other countries and their
obligations to comply with the
- national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
- rights and properties of C.S. and our corporation, and
- Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR), and
resources, including oil, sunshine, sand, climate, money, people, etc., which we do not need and are not very interested in as part of very bad deals for us and for other reasons.
We also have made crystal clear our position in relation to politics, religions, freedoms, rights, properties, and so on.
Listed above are the established and proven rules, which have been discussed, negotiated, and signed over more than 100 years, and therefore we will not play those old, stupid, opportunistic games, but enforce said rights and properties and reign straight through.
23:01 UTC+2
iR Rayfarer plagiarism and fake total disaster
We quote a report, which is about a partial plagiarism and fake of our Ontologic System (OS) with its Ontoverse (Ov) and Ontoscope (Os) in the wearable variant iRaiment (IR) Rayfarer and was publicized on the 3rd of August 2023: "90 percent of Ray-Ban Stories owners aren't using Meta's smart glasses
A second generation of Ray-Ban Stories is still in the pipeline despite the current model's low user retention.
[Image caption:] Various technical issues impacting Meta's Ray-Ban Stories have contributed to poor user experience and a 13 percent return rate.
Meta is struggling to retain users for its Ray-Ban Stories smart glasses, with over 90 percent of consumers having seemingly abandoned the platform, according to a report by [a financial] Journal. Internal company documents viewed by the publication revealed that around 27,000 of the 300,000 units reportedly sold between September 2021 and February 2023 are still being regularly used each month. Last April, Meta was reported to have sold just 120,000 pairs of the Ray-Ban Stories - less than half its 300,000 goal at that time.
The sunglasses, which allow users to take pictures, listen to music, and send / receive Facebook and WhatsApp messages, have seemingly been blighted with various technical problems that have contributed to a poor user experience, according to the report. This includes issues with audio, voice commands, poor battery life, and importing media from other devices, with the document viewed by the Journal noting that the device has been returned by 13 percent of users.
According to the Journal, the document said that Meta was looking to investigate "why users stop using their glasses, how to ensure we are encouraging new feature adoption, and ultimately how to keep our users engaged and retained." The document also predicted that 394,000 Ray-Ban Stories would be sold during the product's lifetime, though the Journal claims Meta had targeted up to 478,000 in unit sales elsewhere.
The disappointing retention for Ray-Ban Stories is a further blow to Meta's Reality Labs, the division that oversees the project alongside its wider metaverse, AR, and VR developments. The division has already lost almost $8 billion in the first six months of 2023, and Meta executives are expecting to see losses "increase meaningfully" in 2024.
Despite these poor results, it seems Meta is still aiming to release a second generation of Ray-Ban Stories with improved cameras and battery life sometime next year. There's no confirmation if these will share the same $299 price as the first-generation release. The low retention figures bring the push for a next-gen model into question considering the company's existing losses - perhaps Meta has some tricks left up its sleeve to prevent Ray-Ban Stories from sharing the same fate as similar projects like Google Glass."
Comment
Total incompetence and even serious crime in action.
Total disaster with preannouncement that only waited for a time and looked for a place to happen.
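As an aside, the retention arithmetic of the quoted report is easily retraced; all figures are taken from the quote:

    # Retracing the retention arithmetic of the quoted report.
    sold = 300_000    # units reportedly sold between September 2021 and February 2023
    active = 27_000   # units still being regularly used each month
    retention = active / sold
    print(f"retention: {retention:.0%}, abandoned: {1 - retention:.0%}")  # 9% vs. 91%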
We quote a second report, which is about a partial plagiarism and fake of our Ontologic System (OS) with its Ontoverse (Ov) and Ontoscope (Os) in the wearable variant iRaiment (IR) Rayfarer and was publicized today: "[...]
Meta introduced a series of new products during an event [...], including [...] a pair of prototype glasses with holographic technology built into the lenses and a series of enhancements to its artificial intelligence assistant, Meta AI.
[...]
The new products are Mr. Zuckerberg's attempt to meld his vision of what social networks can be with what is possible now. Though Meta has had some success selling its virtual reality headsets and a surprise hit in its Ray-Ban augmented reality glasses, Mr. Zuckerberg's metaverse is still years from reality.
[...]"
Comment
New York Trolls. At least, those dirty fellows learned that the partial plagiarism and fake of our Ontologic roBot (OntoBot) wrongly and illegally designated as Meta AI is not a chatbot in contrast to other creatures of the undemocratic lying press.
And our fans and readers do know that it is the vision, creation, expression of idea, compilation, integration, unification, fusion, architecture, etc. of C.S..
The integration of a Virtual Reality Environment (VRE) with the
partial plagiarism and fake of our Ontologic Scope (OntoScope) component and Ontoverse (Ov) wrongly and illegally called Metaverse and Multiverse, and
partial plagiarism and fake of our Ontologic roBot (OntoBot), wrongly and illegally designated as Meta AI,
is a copyright infringement and therefore Meta (Facebook) gets no license.
Eventually, Meta (Facebook) is in the same situation like many other companies and therefore has no chance at all with its totally ridiculous, fraudulent, and even serious criminal activities and should be happy, if it is allowed to comply with the
national and international laws, regulations, and acts, as well as agreements, conventions, and charters,
rights and properties of C.S. and our corporation, and
Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR),
and to utilize the exclusive and mandatory infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies.
This set of legal requirements also demands the transfer of all illegal materials, including the Ontoscope variant Ray-Ban Stories of Meta (Facebook) and EssilorLuxottica.
Or otherwise, if Meta (Facebook) still refuses to do so, then it will take us only around 6 months or even less time to enforce the rights and properties of C.S. and our corporation by, for example, preliminary injunction and cease and desist order, all of which are in preparation so that we are prepared, including the separation of its business and the distribution of the resulting management tasks, functions, and operations, and also the advertisement profits of its business units among the Joint Venture (JV) partners of our SOPR.
Eventually, it is the decision of Meta (Facebook) to sign, pay the damage compensation or alternatively become a JV, and comply, or just go away and become a mistake of the past.
So much about somebody, who thought and still thinks he had outmaneuvered the rest of Silicon Valley to keep that company winning.
See also the notes
There is only one OS and Ov of the 19th of September 2023,
OS and Os are originals of the 24th of January 2024,
There is only one OS and Ov #2 of the 1st of April 2024,
Without complete damages no essential facilities of the 13th of June 2024,
SOPR will not give up on controls and damages of the 19th of June 2024,
Clarification of the 2nd of August 2024,
% + %, % OAOS, % HW #16 of the 16th of August 2024,
The big AI bluff has been busted, too of the 21st of August 2024,
Inability to act and stalemate support all claims of the 27th of August 2024,
WhatsoeverGPT bluff and hype is over of the 29th of August 2024,
Support of illegal FOSS and start-ups is dealbreaker of the 3rd of September 2024,
% + %, % OAOS, % HW #17 of the 8th of September 2024,
and the other publications cited therein.
14:25 UTC+2
DWD AI weather model infringement
Deutscher Wetterdienst (DWD)==German Weather Service
Artificial Intelligence (AI)
A quick look shows easily that the essential part of the original and unique, visionary and unbelievable, unforeseeable and unexpected, personal and copyrighted, and prohibited for fair dealing and fair use works of art titled Evolutionary operating system and Ontologic System and created by C.S., comprising the
vision,
creation, expression of idea,
compilation (collection and assembling), selection, composition, integration, unification, fusion, architecture (of existing ideas, basic blocks, etc.),
components,
applications, services,
etc.,
specifically of
Bionics, Cybernetics, Ontologics, Epistemology, Physics,
Quality Management (QM), Deming circle, Plan-Do-Check-Act (PDCA) as iterative and incremental multi-loop,
Problem Solving Environment (PSE),
Machine Learning Model (MLM), Artificial Neural Network Model (ANNM),
physical model and simulation,
earth model and simulation, and
weather model and simulation,
for
training,
reflection,
feedback mechanism,
learning,
verification,
validation,
etc.,
have been taken as sources of inspiration and blueprints to make a plagiarism and fake respectively a work with substantial similarity and without referencing, authorization, and licensing.
These are some of the many big mistakes of all bad actors, because our Evoos and our OS are not just uncopyrightable ideas, but are copyrighted expressions of idea.
Another fact is that these expressions of idea created by C.S. are so exotic and different in comparison with every other work that
they are considered sui generis works of art on the one hand and
they are directly recognizable as the originals in comparison with every similar work, which is only a modification or an implementation and does not present a new expression of idea, on the other hand.
This is the next of the many big mistakes of all bad actors.
Therefore, the infringement of the exclusive moral rights respectively Lanham (Trademark) rights and also the copyrights, including the exclusive rights for
naming and labelling,
- referencing respectively citation with attribution, and
- designation,
presentation,
modification, and
exploitation (e.g. commercialization (e.g. monetization)),
and also
performance, and
reproduction,
of C.S. and our corporation are obvious.
These are more of the big mistakes of all bad actors.
09:10 UTC+2
SOPR added regulations to ToS
Our SOPR added the following regulations to the Terms of Service (ToS):
Our SOPR reserves the right to take legal action against anyone and anything, anytime and anywhere.
Our SOPR reserves the right to publish all documents, including
- internal communication documented in corporations and
- evidence presented in court.
This is an amendment to the regulation related to damaging the goals and even threatening the integrity of our SOPR.
Our SOPR reserves the right to prohibit the performance and reproduction of any expression of idea created by C.S. by an extremist entity.
Our SOPR views the publication and the support of disturbing or even illegal Free and Open Source Software (FOSS) as a disturbance of the goals and even a threat of the integrity of our SOPR.
This is an amendment to the regulation related to performing and reproducing certain parts of the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S..
Our SOPR demands naming and labelling
- referencing respectively citation with attribution, and
- designation,
of Free and Open Source Software (FOSS), which is
based on or
integrated with
the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S..
Please note that the transfer of all illegal materials includes FOSS, which infringes the exclusive moral rights respectively Lanham (Trademark) rights and also the copyrights, including the exclusive rights for
naming and labelling,
- referencing respectively citation with attribution, and
- designation,
presentation,
modification, and
exploitation (e.g. commercialization (e.g. monetization)),
and also
performance, and
reproduction,
of C.S. and our corporation.
We are considering removing the Main Contractor Model (MCM), because the related tasks belong to the exclusive and mandatory infrastructures of our SOPR and our other Societies and therefore it makes no sense in relation to other regulations of the ToS, like for example the regulations related to the establishment of a Joint Venture (JV) and so on.
14:11 UTC+2
99% 'R' Us + 1% Nvidia
Nvidia has stolen so much of our original and unique, visionary and unbelievable, unforeseeable and unexpected, personal and copyrighted, and prohibited for fair dealing and fair use works of art titled Evolutionary operating system and Ontologic System and created by C.S. and is conspiring with so many other entities that it should be obvious for all entities concerned that a lot of matters related to Nvidia will change in the near future, for example in relation to
what is wrongly and illegally called
- Nvidia Drive,
- Omniverse,
- New General Catalogue (NGC),
- Nvidia Graphics Processing Unit (GPU) Cloud, and
- this and that,
the whole licensing practice,
the
and so on.
Needless to say, this will also have a lot of implications and changes for its
business partners and customers with their data centers, networks, devices, applications, and services, and
investors and share holders with their
- payment of damage compensations, which for sure is the highest of royalties unpaid, profits generated, and business values increased by infringing the rights and properties of C.S. and our corporation alone and in collaboration, or
- establishment of the Joint Venture (JV) between Nvidia (silicon GPU and CUDA) and our SOPR (all the rest) as alternative.
That was only seeing, grabbing, implementing, copying, and controlling, which means stealing, but neither providing choice, inventing or creating, nor competing.
Like the companies Microsoft, Apple, Alphabet (Google), Meta (Facebook), and maybe others Nvidia gets no own Ontologic System (OS).
And we already said that it is complete nonsense to abuse the market power, conduct wire fraud, and conspire with other bad actors to establish a level playing field and simulate an ordinary technological progress and freedom of choice, innovation, and competition to protect the core business, walled garden, illegal monopoly, and ... market power on the basis of which company sells which plagiarism and fake. And we already said that competition does not take place for our rights and properties, but a level higher on top of our essential facilities and on the level of Artificial Intelligence (AI) accelerator or Neural Processing Unit (NPU), etc., and therefore and for other reasons no level playing field will be established on this lower level of the essential facilities provided and exploited exclusively by our SOPR.
And it is completely nuts that an entity pays a royalty to us, if we waive the rights and properties of C.S. and our corporation, so that no requirement to pay a royalty exists anymore. And it is even more mad that we should waive the rights and properties of C.S. and our corporation, if others pay a royalty for that, instead of enforcing said rights and properties.
And because they acted in illegal ways and have been caught in the act again and again, we are entitled to damage compensations, which is the highest of royalties unpaid, profits generated, and business values increased since the 1st of January 2007, the 1st of January 2014, the 1st of January 2017, and the 1st of January 2020, and so on. For example, take the business value of today minus the business value at these dates and take from the differences 90 to 99% for us and 10 to 1% for the other party, as in the sketch below.
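A minimal sketch of the computation described above; all monetary values are purely illustrative placeholders and not actual valuations:

    # Minimal sketch of the damage compensation computation described above.
    # All monetary values are purely illustrative placeholders, not actual valuations.

    def compensation(royalties_unpaid: float, profits_generated: float,
                     business_value_increase: float) -> float:
        """The highest of royalties unpaid, profits generated, and business values increased."""
        return max(royalties_unpaid, profits_generated, business_value_increase)

    def our_share_of_increase(value_today: float, value_at_date: float,
                              share: float = 0.90) -> float:
        """Difference in business value since a cut-off date, split 90-99% vs. 10-1%."""
        assert 0.90 <= share <= 0.99
        return (value_today - value_at_date) * share

    # Illustrative business values in U.S. Dollar for the cut-off dates listed above:
    value_today = 3_000e9
    for year, value_then in [(2007, 100e9), (2014, 400e9), (2017, 600e9), (2020, 1_000e9)]:
        print(year, f"{our_share_of_increase(value_today, value_then) / 1e9:,.0f} bn for us")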
See also the notes
There is only one OS and Ov of the 19th of September 2023,
OS and Os are originals of the 24th of January 2024,
There is only one OS and Ov #2 of the 1st of April 2024,
Without complete damages no essential facilities of the 13th of June 2024,
SOPR will not give up on controls and damages of the 19th of June 2024,
Clarification of the 2nd of August 2024,
% + %, % OAOS, % HW #16 of the 16th of August 2024,
The big AI bluff has been busted, too of the 21st of August 2024,
Inability to act and stalemate support all claims of the 27th of August 2024,
WhatsoeverGPT bluff and hype is over of the 29th of August 2024,
Support of illegal FOSS and start-ups is dealbreaker of the 3rd of September 2024,
% + %, % OAOS, % HW #17 of the 8th of September 2024,
and the other publications cited therein.
By the way:
What the heck did the companies Microsoft, Alphabet (Google), Amazon, Meta (Facebook), Adobe, and so on think they are doing and are being allowed to do?
No judge on this planet Earth will let them steal rights and properties worth more than 10 trillion U.S. Dollar from us. And the damage compensations are already the total control of their companies by C.S. and our corporation.