From all angles
A few weeks ago, we shared the foundational principle of our artificial intelligence platform, Golineuro AI: computer vision.
Today, we’ll delve a bit deeper into how the platform itself operates. To collect the data that Golineuro AI classifies and analyzes, the platform relies on a series of sensors – similar to cameras, but without recording or storing images – that transmit their data to the platform.
The platform incorporates a set of algorithms that extract information for different variables, such as classification or counting. The more specialized the algorithms, the greater the precision that can be achieved. Consider, for instance, algorithms tied to camera angle: one shape-detection algorithm might specialize in zenithal (overhead) images, while another focuses on lateral ones. In this way, precision improves according to the position of the sensors.
From its original design, our platform has included not only algorithms for different camera angles but also algorithms for various lighting conditions, including low light and predominant color casts (dusk, sunset, etc.).
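To make the idea concrete, the selection of a specialized algorithm by camera angle and lighting could be sketched as a simple lookup with a generic fallback. This is purely illustrative: the class and algorithm names below are hypothetical and do not come from Golineuro AI's actual implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SensorContext:
    """Conditions reported by a sensor (names are illustrative)."""
    angle: str     # e.g. "zenithal" (overhead) or "lateral"
    lighting: str  # e.g. "daylight", "low_light", "dusk"


# Registry mapping sensor conditions to a specialized detector.
# Entries here are placeholders, not real model names.
ALGORITHMS = {
    ("zenithal", "daylight"): "shape_detector_overhead",
    ("zenithal", "low_light"): "shape_detector_overhead_lowlight",
    ("lateral", "daylight"): "shape_detector_lateral",
    ("lateral", "dusk"): "shape_detector_lateral_warmtone",
}


def select_algorithm(ctx: SensorContext) -> str:
    """Pick the most specialized algorithm for the sensor's conditions,
    falling back to a generic model when no specialist exists."""
    return ALGORITHMS.get((ctx.angle, ctx.lighting), "shape_detector_generic")


print(select_algorithm(SensorContext("zenithal", "low_light")))
# A combination with no specialist falls back to the generic detector:
print(select_algorithm(SensorContext("lateral", "fog")))
```

The design choice this illustrates is the one the article describes: rather than one model that must cope with every angle and lighting condition, a registry of specialists (plus a generic fallback) lets each model stay narrow and precise.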
Golineuro AI can operate in any context, lighting conditions, and with any camera angle.
This allows it to function in almost any indoor or outdoor context, regardless of lighting conditions or camera angle. In this way, the neural networks can identify human shapes on a monitored surface, classify them, and then analyze their movements or actions, retaining only aggregated counters rather than the actual images or any identifying image data.
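The privacy-preserving step at the end of that pipeline, where only aggregated counters survive, could look roughly like the sketch below. It assumes, purely for illustration, that the neural networks emit per-frame detections as (class label, action) pairs; no image data enters this stage, and nothing identifying leaves it.

```python
from collections import Counter


def aggregate(detections):
    """Fold per-frame detections into anonymous counters.

    `detections` is assumed to be an iterable of (label, action) pairs
    produced upstream by the detection and classification models.
    Only counts are kept; no image or identifying data is stored.
    """
    counters = Counter()
    for label, action in detections:
        counters[(label, action)] += 1
    return dict(counters)


# Example: detections from one monitored frame (hypothetical values).
frame_detections = [
    ("person", "walking"),
    ("person", "walking"),
    ("person", "standing"),
]
print(aggregate(frame_detections))
# {('person', 'walking'): 2, ('person', 'standing'): 1}
```

Because the raw detections are discarded as soon as they are counted, the stored output is inherently anonymous, which matches the article's claim that the platform keeps only aggregated counters.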