- It is not possible to load all data into SAP HANA: the data volumes are too large, and HANA hardware is disproportionately expensive for this purpose. It is advisable to use an appropriately scalable and cost-effective solution for the mass of raw data, e.g. an Apache Hadoop cluster. Such clusters scale across many inexpensive machines, and their architecture treats the failure of individual components as a normal occurrence. For example, the Hadoop file system HDFS always stores each block of data multiple times and therefore remains operational even if individual servers fail. At the same time, the system must of course ensure that this replicated data is neither counted nor delivered more than once; Hadoop was designed with exactly this in mind. Smaller extracts or aggregates of these large data volumes on the Hadoop side can then be loaded into HANA, or kept in memory on the Hadoop side, e.g. with the help of the Spark adapter used in SAP Vora.
- Not only data but also events must be orchestrated. The occurrence of certain events in the data (e.g. alarm messages) can trigger processes both on the big data side, i.e. the Hadoop cluster, and on the BW side (reloading data, creating service messages, etc.). These events must be coordinated on both sides so that processes can be defined across technologies. In addition to managing data transfers, the SAP Data Hub therefore also offers joint management of events, jobs, and triggers.
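The replication behavior described in the first bullet can be illustrated with a small toy sketch in plain Python. This is not real HDFS: the `blocks`, `replicas`, and `read_file` names and the node layout are purely illustrative. The point is that each block lives on several nodes, a read delivers every block exactly once, and the file stays complete even when a node fails.

```python
# Toy sketch of HDFS-style replication: each block is stored on
# several nodes, but a read must return each block exactly once,
# even when individual nodes fail. All names are illustrative.
blocks = {"b1": "raw data part 1", "b2": "raw data part 2"}
replicas = {  # node -> blocks held on that node (hypothetical layout)
    "node1": ["b1", "b2"],
    "node2": ["b1"],
    "node3": ["b2"],
}

def read_file(replicas, failed=()):
    """Collect each block once from any surviving replica."""
    seen = {}
    for node, held in replicas.items():
        if node in failed:
            continue  # node is down; other replicas still serve its blocks
        for block in held:
            seen.setdefault(block, node)  # deliver each block only once
    return sorted(seen)

print(read_file(replicas))                     # all blocks available
print(read_file(replicas, failed=("node1",)))  # still complete after a failure
```

With `node1` failed, `b1` is served from `node2` and `b2` from `node3`, so the file remains fully readable; this is the property that makes a cluster of cheap hardware viable for the raw data.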
The individual areas are controlled separately, but a complete cycle can look like this: a product is removed from the high rack, and the central crane takes it to the kiln, where it is processed (symbolized by flashing lights). From the kiln it goes to the sorting system, where a photocell recognizes its color and it is sorted into one of three chutes accordingly. From there it is picked up by the crane and taken back to the high rack, where it is sorted in again. The individual areas are equipped with motors and, in some cases, with sensors such as light barriers or photocells for color recognition.
Here is the link to the product homepage of this factory: https://www.fischertechnik.de/de-de/service/elearning/simulieren/fabrik-simulation-24v.
The factory is delivered fully assembled, but without control units and therefore without the corresponding programming. To control the factory, eight Siemens LOGO! 8 control units were connected to the corresponding sensors and motors. The programming in the LOGO language was then carried out by our Basis expert, Peter Straub. This programming is quite complex because the language and the factory components provided are very simple. There are only a few sensors. If, for example, the central crane is to be moved to its home position, it is simply rotated in a certain direction for a certain time. Since it cannot turn past a fixed stop, it is then guaranteed to be in the 0 position. The elements of the LOGO language are likewise very elementary (AND gates, OR gates, time delays, and the like). For example, there are no convenient variables that could be used for stock management. The wiring of the devices and the programming of the factory took about four weeks.
The next part of this blog series will deal with how data from this factory is transferred to a big data cluster.