Discover the gold in your data!


You can tailor our services to get the best possible results!

These cover all topics relating to data: from simple analyses to highly complex AI algorithms, from creating individual data pipelines to setting up your cloud services and developing complete software products.

Data analytics

Let your data speak to you!

You’ve probably heard the phrase “Data is the gold of the 21st century.” Unfortunately, this is only true to a limited extent: data must be of high quality and must be understood correctly to be worth its weight in gold. A descriptive data analysis is therefore our top priority at the start of a new project! We let your data speak to you and uncover insights that were previously hidden.


/* Generation of previously hidden insights
/* Clear visualization of the results
/* What-if analyses and A/B scenarios
/* Handling of all types and formats of data

Services in detail

Data analytics is the process of converting raw data into understandable information in order to gain meaningful insights. On this basis, we then generate proposals or recommendations that form the foundation for far-reaching business decisions.

Especially at the beginning of a project, the question often arises as to the quality of an existing database and what can be derived from it. Our analysts quickly generate first-look insights that can then be efficiently expanded in several coordinated iterations.

The right presentation is key to making the generated results understandable for all stakeholders. To achieve this, we use easy-to-understand diagrams and graphics and break complex issues down to the essential information.

What-if analyses are usually based on combinations of different parameters. We provide interactive dashboards that can be customized with drop-down menus or sliders to effectively compare different scenarios based on various metrics.
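At its core, a what-if comparison evaluates one metric function over named parameter scenarios; a dashboard then simply binds those parameters to sliders and drop-downs. The following is a minimal sketch with a made-up `revenue` metric and made-up scenario numbers, purely for illustration:

```python
def revenue(price, volume, cost_per_unit, discount=0.0):
    """Hypothetical what-if metric: net revenue for one scenario."""
    effective_price = price * (1 - discount)
    return (effective_price - cost_per_unit) * volume

# Two illustrative "A/B" scenarios; in a dashboard these parameters
# would be driven by sliders and drop-down menus instead.
scenarios = {
    "A (no discount)": dict(price=10.0, volume=1000, cost_per_unit=6.0),
    "B (5% discount)": dict(price=10.0, volume=1150, cost_per_unit=6.0,
                            discount=0.05),
}
results = {name: revenue(**params) for name, params in scenarios.items()}
```

Comparing `results` across scenarios is exactly what the interactive dashboards do, just with live controls and richer metrics.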

We work with all data formats, from simple CSV files and all common database systems to distributed data sets in the terabyte or even petabyte range processed with Spark. Data from different systems can also be filtered, cleansed and merged.
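As a minimal, self-contained illustration of filtering, cleansing and merging, here is a sketch using Python's standard library and two made-up in-memory CSV sources (at real scale this would be pandas or Spark):

```python
import csv
import io

# Two hypothetical sources: customers from a CRM export, orders from a shop.
customers_csv = "customer_id,name\n1,Alice\n2,Bob\n"
orders_csv = "order_id,customer_id,amount\n10,1,99.5\n11,2,\n12,1,25.0\n"

# Index customers by id for the merge step.
customers = {row["customer_id"]: row
             for row in csv.DictReader(io.StringIO(customers_csv))}

merged = []
for row in csv.DictReader(io.StringIO(orders_csv)):
    if not row["amount"]:            # cleanse: drop rows with missing amounts
        continue
    customer = customers.get(row["customer_id"])
    if customer is None:             # filter: keep only orders with a known customer
        continue
    merged.append({"order_id": row["order_id"],
                   "name": customer["name"],
                   "amount": float(row["amount"])})
```

The same three steps, filter, cleanse, merge, carry over directly to database joins or Spark transformations on distributed data.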

Good, effective communication is essential for understanding scenarios and objectives. We know how to ask the right questions, even beyond technical matters. This helps us understand our customers’ business processes and deliver the right results.

Data Science

Digitization was yesterday; today it is algorithmization and cognification!

Nowadays, simple digitalization is no longer enough to automate processes: the latest technologies enable you to automate and optimize cognitive processes and decisions.

We develop the necessary algorithms individually for your application. This enables you to achieve your business goals and get the most out of your data. You say “goodbye” to manual processes that overtax the human brain and become an intelligent, data-based organization that makes decisions based on real facts.


/* Customized development of state-of-the-art algorithms according to your needs
/* Utilization of the latest findings from science and practice
/* Development of artificial intelligence, machine learning, mathematical optimization and much more
/* Integration of any data sources and processes

Services in detail

Predicting future demand for goods and services is valuable in a variety of industries, and in many cases it is possible to identify patterns in historical data that are highly likely to repeat over time. Time series analysis and regression models of seasonality and trend can be applied quickly and successfully in practice. For large numbers of categories, and to take causal influencing factors into account, we use state-of-the-art deep learning algorithms.
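The basic idea of modelling trend plus seasonality fits in a few lines. This deliberately simplified sketch (not the deep learning models mentioned above) estimates a linear trend from the series endpoints and a seasonal index from the average deviation at each position in the cycle:

```python
def forecast(history, season_length, horizon):
    """Toy trend + seasonality model: linear trend from the endpoints,
    seasonal index from average detrended deviation per cycle position."""
    n = len(history)
    slope = (history[-1] - history[0]) / (n - 1)      # average trend per step
    detrended = [y - slope * t for t, y in enumerate(history)]
    base = sum(detrended) / n
    seasonal = []
    for s in range(season_length):
        vals = [detrended[t] for t in range(n) if t % season_length == s]
        seasonal.append(sum(vals) / len(vals) - base)
    # Extrapolate trend and add the seasonal index of each future step.
    return [base + slope * (n + h) + seasonal[(n + h) % season_length]
            for h in range(horizon)]

# Made-up demand series with period 4 and a trend of +2 per step.
fc = forecast([10, 17, 11, 14, 18, 25, 19, 22, 26], season_length=4, horizon=2)
```

Real demand data is noisier and rarely this regular; the sketch only shows why seasonality and trend are worth separating.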

Production facilities and supply chains offer a wide range of opportunities for mathematical optimization methods. When planning capacity and allocating it to machines, costs should be minimized and throughput maximized, taking operational conditions and uncertainty in expected demand into account. Inventory management involves weighing how high safety stocks must be to ensure timely availability without tying up too much capital. Cutting material optimally to minimize waste is an example where improving efficiency also means improving sustainability.
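A toy instance of such a planning problem: choosing which jobs to run on a single machine with limited hours is a textbook 0/1 knapsack, solvable by dynamic programming. Real capacity planning adds many more constraints and typically uses MILP solvers; the numbers below are purely illustrative:

```python
def plan(capacity, jobs):
    """0/1 knapsack sketch: pick jobs (hours, profit) to maximize total
    profit within one machine's available hours."""
    best = [0] * (capacity + 1)          # best[c] = max profit using c hours
    for hours, profit in jobs:
        # Iterate capacity downwards so each job is used at most once.
        for c in range(capacity, hours - 1, -1):
            best[c] = max(best[c], best[c - hours] + profit)
    return best[capacity]

# Hypothetical jobs as (hours, profit) on a 10-hour machine.
result = plan(10, [(5, 60), (4, 40), (6, 70), (3, 20)])
```

Here the optimum combines the 6-hour and 4-hour jobs; the same recursion pattern underlies many cutting and allocation problems.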

Automation and prediction models are used for preventive and predictive maintenance, both in production and in the servicing of manufactured products. Using sensor data, our models learn to recognize whether products are installed correctly, whether repairs are needed or whether parts should be replaced.

The more volatile the market and the higher the number of products, whether in B2B or B2C, the greater the challenge of setting prices manually. However, a pricing model that learns only passively from sparse historical data leads to incorrect prices and losses. For optimal, automated pricing in near real time, we combine reinforcement learning with demand models that also take the dependencies between products into account.
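To make the active-learning idea concrete, here is a heavily simplified sketch of price learning as an epsilon-greedy bandit over a few candidate price points. The `demand` function is a made-up stand-in for the learned demand models, and real systems would also model cross-product dependencies:

```python
import random

def learn_price(prices, demand, rounds=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit sketch: actively try candidate prices and
    track the running average revenue observed per price."""
    rng = random.Random(seed)
    revenue_sum = {p: 0.0 for p in prices}
    count = {p: 0 for p in prices}
    for _ in range(rounds):
        if rng.random() < epsilon or not all(count.values()):
            p = rng.choice(prices)                               # explore
        else:
            p = max(prices, key=lambda q: revenue_sum[q] / count[q])  # exploit
        units = demand(p) + rng.gauss(0, 1)   # noisy observed demand
        revenue_sum[p] += p * units
        count[p] += 1
    return max(prices, key=lambda q: revenue_sum[q] / count[q])
```

With a hypothetical linear demand curve such as `lambda p: 100 - 8 * p`, the bandit converges to the revenue-maximizing candidate price instead of relying on passively collected history.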

Cluster analyses that discover similarities and classification methods that identify user behavior can be applied in many ways for retailers and service providers, for example in marketing and churn campaigns. Together with our customers, we find further creative applications. Defining the influencing factors (feature engineering) is an art in itself, and the resulting findings are sometimes surprising.
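As a bare-bones illustration of the clustering side, here is plain k-means on a handful of made-up 2-D points (production work would use scikit-learn or similar, with engineered features instead of raw coordinates):

```python
def kmeans(points, k, iters=20):
    """Plain k-means sketch: the first k points serve as initial centers."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                                            + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two hypothetical, clearly separated customer segments.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, 2)
```

The two recovered centers land in the middle of each segment; churn or marketing segments emerge the same way, just in higher-dimensional feature spaces.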

If the data sources are still analog, such as documents in paper form, algorithms for image and text recognition can convert them into digital form. Different methods are required for pre-processing scanned documents, classifying document types, recognizing content data and converting it into structured form, for example for the automated capture of order data.

Data Engineering

Use the latest technologies so that your data contributes to your business success!

Current data requirements in terms of speed, availability and security demand professional data handling. The overall project can only succeed if this handling is adapted to the objective.


/* Creation of data pipelines and data storage
/* Linking software development and operations
/* Comprehensive scaling options
/* Optimized automation

Services in detail

We understand all types of databases, e.g. relational, NoSQL or graph-based. Depending on the situation, we set these up centrally or in a distributed fashion so that they can be scaled quickly and securely at any time. Incremental backups protect against data loss.

Whether Amazon AWS, Microsoft Azure or Google GCP – our engineers set up pipelines and services in the right environment for you. In addition to setting up the infrastructure, we also focus on the corresponding security measures and on testing the systems.

The processes between development and operations are automated and optimized to ensure a smooth transition. Components such as image building, versioning, testing and deployment are triggered and validated via continuous integration/continuous delivery (CI/CD) pipelines.

Our services are typically based on Docker containers and therefore follow a microservices architecture. This ensures that code can be deployed independently of the platform and always runs in the correct environment with the correct dependencies. Depending on the deployment stage (development, test, production), environment variables can supply different information such as database addresses or additional services.
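The environment-variable pattern can be sketched in a few lines; the variable names (`APP_STAGE`, `DATABASE_URL`, `APP_DEBUG`) are hypothetical examples, and any naming scheme works:

```python
import os

def load_config():
    """Read settings from the environment so that the same container image
    behaves correctly in development, test and production; only the
    injected variables differ per stage."""
    return {
        "stage": os.environ.get("APP_STAGE", "development"),
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "debug": os.environ.get("APP_DEBUG", "1") == "1",
    }
```

In production the container would be started with the stage-specific values injected (e.g. `docker run -e APP_STAGE=production -e DATABASE_URL=...`), while local development simply falls back to the defaults.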

By setting up and using Kubernetes clusters, we can automatically provide the right number of replicas of container-based services with high availability and scale them so that no overload occurs. Load balancers distribute requests optimally across the services, which generate the corresponding responses at maximum speed. Kubernetes is the de facto standard for container orchestration.

To use our services effectively, they must be integrated into existing systems or linked to them. Various aspects relating to security, performance and accessibility need to be considered here, always in close dialog with the customer.

Software development

Make the value of your data and algorithms accessible: with software!

Even the best idea can fail with a bad design! Take advantage of our expertise in the fusion of data science and software to turn your idea into a success!

For algorithms to be used efficiently, they must be integrated into software. The software can either be designed to provide as much added value as possible for its users, or to be as automated as possible and thus function autonomously. We accompany your project throughout its entire life cycle or take it over at any point, depending on your wishes.


/* Optimal software design for your data science project
/* From user-centered products to autonomous AI systems
/* Over the entire life cycle, from the initial idea to the final optimization
/* State-of-the-art IT technologies for maximum performance

Services in detail


Always individual and geared towards optimization.

Discover and use our services to make your project a success: from strategy consulting to project management and change management, we offer everything you need to reach your goals!

Project request

Thank you for your interest in m²hycon’s services. We look forward to hearing about your project and attach great importance to providing you with detailed advice.

We store and use the data you enter in the form exclusively for processing your request. Your data is transmitted in encrypted form. We process your personal data in accordance with our privacy policy.