February 21, 2019 | Blog

The creeping costs of unplanned downtime

Unplanned downtime

Unplanned downtime is every production manager's nightmare. When a line stands still, the expected production volume is not met. The production schedule is reshuffled to limit the damage as much as possible. Technicians must abruptly drop everything to get production running again as quickly as possible. Parts are ordered and replaced, and customers are informed that they will have to wait longer for their products. After a stoppage, the loss is calculated from production, downtime, and labor costs, the cost of new parts, and any additional costs incurred because agreements with customers could not be honored. A figure is put on the downtime. Despite this careful calculation, many companies underestimate the indirect costs of downtime. This article challenges you to look beyond the commonly counted costs.

Chain reaction

A breakdown is bad for production, but it often doesn't stop there. A stoppage can affect the customer, and the customer's customer. When a wholesaler places an order, for example, and the order cannot be delivered on time, the wholesaler will in turn have to disappoint its own customers, the retailers. The retailer may then face empty shelves in his shop and must disappoint shoppers in turn. Downtime thus often triggers a chain reaction of consequences that the production and maintenance departments are not always aware of.

Unnecessary capacity

Many companies accept that part of their unplanned downtime cannot be prevented, and factor it in. To still reach the required output, they use more assets than necessary. If that downtime could be (partly) eliminated, the plant could achieve the same output with fewer assets. The cost of these redundant assets therefore also belongs to the indirect costs of unplanned downtime.

Emotion

Downtime also stirs up a lot of emotion on the factory floor. When everything is running, everyone is happy. When faults or unplanned stoppages occur, the production department is often up in arms. The maintenance department must abruptly drop its regular work, solve the problems as quickly as possible, and track down the root cause. The bigger or more complex the problem, the more stress it generates, which can lead to a negative working atmosphere. When unplanned breakdowns occur constantly, this can sap employee motivation and, in the worst case, lead to voluntary departures. That means a loss of knowledge and experience in a sector where knowledge retention is becoming ever more important.

Preventing unplanned downtime

Unplanned downtime therefore costs more than lost production alone. It is important to quantify the hidden costs and include them in the calculation of the cost of unplanned downtime. This makes it visible that the costs of unplanned downtime are even higher than originally thought, and further underlines the importance of preventing unplanned downtime within the company.

Want to know more about eliminating unplanned downtime? Take a look at our solution, follow us on LinkedIn, or schedule a call.

February 19, 2019 | Blog

Maintenance strategy

Condition-based maintenance

The growing availability of data has made more and more possible in recent years. Proper analysis of the available data can lead to better insights, resulting in higher efficiency and a stronger competitive position. Maintenance strategies are changing. Where corrective and preventive maintenance once dominated, the field is gradually shifting toward condition monitoring and predictive maintenance. This article sets out the advantages and disadvantages of the different maintenance strategies, making clear why condition-based maintenance and predictive maintenance are the maintenance methods of the near future.

Corrective maintenance

Corrective maintenance, also known as "firefighting", makes little use of data. A fault occurs and the maintenance technician has to fix it as quickly as possible. Corrective maintenance comes with several drawbacks. When a fault occurs or a part breaks, production comes to an unplanned standstill. This can cause considerable stress. Depending on the severity of the situation, spare parts must be ordered and work scheduled, which can mean a long period of unplanned downtime. The costs (and hidden costs) can quickly mount.

Preventive maintenance

To avoid these high costs, many companies opt for a second strategy: preventive maintenance. Under this strategy, inspections are carried out and parts are replaced early, before any faults occur. This increases the reliability and availability of the assets.

For critical assets where the risk of failure is unacceptable, a preventive maintenance plan is drawn up. A maintenance management system is in place, stating clearly what is expected of the technician and when.
Nevertheless, preventive maintenance also incurs unnecessary costs. Maintenance is scheduled early (read: too early) to prevent assets from failing. Since maintenance is not yet necessary at that point and parts are replaced prematurely, part of the useful life of a machine or component is effectively thrown away.

Balancing preventive and corrective maintenance

Many organizations therefore balance corrective and preventive maintenance. Critical assets, for which reliability and availability matter most, get a preventive maintenance plan. For less critical assets, where faults can be fixed quickly, corrective maintenance is preferred. In this way the maintenance organization tries to reach an optimum in which corrective and preventive maintenance are in balance, keeping the total cost of ownership as low as possible.
These two strategies used to be very common. With the growing availability of data, more and more has become possible, and the strategies condition-based maintenance and predictive maintenance are being used more often.
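As a toy illustration of that trade-off, the expected annual cost of each strategy can be compared per asset. All numbers, asset names, and cost fields below are invented for the example, not drawn from a real plant:

```python
# Toy comparison of expected annual maintenance cost per strategy.
# All figures are illustrative assumptions.

def corrective_cost(failures_per_year, cost_per_failure):
    """Run-to-failure: pay only for the breakdowns that occur."""
    return failures_per_year * cost_per_failure

def preventive_cost(services_per_year, cost_per_service,
                    residual_failures, cost_per_failure):
    """Planned services plus the (reduced) failures that still slip through."""
    return services_per_year * cost_per_service + residual_failures * cost_per_failure

def cheaper_strategy(asset):
    c = corrective_cost(asset["failures"], asset["failure_cost"])
    p = preventive_cost(asset["services"], asset["service_cost"],
                        asset["residual_failures"], asset["failure_cost"])
    return "preventive" if p < c else "corrective"

# A critical pump with expensive failures vs. a cheap, easy-to-fix conveyor.
critical_pump = {"failures": 2, "failure_cost": 50_000,
                 "services": 4, "service_cost": 5_000, "residual_failures": 0.2}
small_conveyor = {"failures": 3, "failure_cost": 1_000,
                  "services": 4, "service_cost": 2_000, "residual_failures": 0.5}
```

With these invented numbers, the expensive-to-fail pump favors the preventive plan and the conveyor favors run-to-failure, which mirrors how organizations split their asset base between the two strategies.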

Condition-based maintenance

With condition-based maintenance (CBM), maintenance is carried out based on the condition of assets, which in turn is established through some form of condition monitoring. Here, companies examine a specific asset, such as a motor or a pump. Using real-time data, they gain insight into the asset's condition and can estimate how long it can keep running without problems. This makes it possible to determine when maintenance needs to be performed.

With CBM, little or no (remaining) useful life of a machine or component is lost. It does, however, require a lot of data about the condition of the machines. And when applied at scale, it becomes necessary to analyze this data automatically and convert it into information about the remaining useful life of assets. Based on all the information available, the maintenance department can determine which assets are due for maintenance and with what priority specific assets should be addressed.

Predictive maintenance

Predictive maintenance is likewise a strategy that makes use of data. As a rule, it looks not at individual assets but at an entire fleet of assets. To make maintenance predictions, historical data is analyzed. External environmental factors at the different sites, for example, are also taken into account. Due to climate conditions, a motor installed in the cold North will be able to run fewer hours before maintenance is needed than the same type of motor running in a warmer climate.

Predictive maintenance is well suited for making smart purchasing decisions, for accurately planning future maintenance, and for assets where condition monitoring is not possible or too expensive. Predictive maintenance is thus about developing ever better insights into the failure frequency of assets based on historical data and the influence of environmental factors on it.
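As a deliberately simplified sketch of this fleet-level idea, one could fit a straight line relating an environmental factor (here, average ambient temperature) to the hours an asset runs before maintenance is needed. The data points and the linear model are illustrative assumptions only, not a real predictive maintenance model:

```python
# Minimal sketch: estimate how ambient temperature shifts the expected
# running hours between maintenance events, using ordinary least squares.
# The fleet history below is invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b * x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Fleet history: (avg ambient temperature in degrees C, hours run before maintenance)
history = [(-5, 3100), (0, 3400), (10, 3900), (20, 4400), (25, 4650)]
temps, hours = zip(*history)
a, b = fit_line(temps, hours)

def expected_hours(temp):
    """Predicted running hours before maintenance at a given temperature."""
    return a + b * temp
```

On this invented data the fitted slope is positive: motors at warmer sites are predicted to run longer before maintenance, matching the cold-North example above. A real model would of course use many more factors and a proper validation scheme.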

Start in time

Predictive maintenance and CBM are the maintenance strategies of the future. Depending on the application and the goal, either CBM or predictive maintenance is the better fit. A precondition for both strategies is that sufficient reliable data is available. It is therefore vital that companies start early with identifying ways to collect this data and turn it into insights effectively.

 

February 7, 2019 | Blog

The best manufacturing strategy

Manufacturing

Automation, Artificial Intelligence, and Robotics are hot topics in the manufacturing space. The “Lights Out Factory” — the factory where the entire process is automated — is around the corner. Some may believe this idea is the eventual future of all manufacturing, and the robots will replace us all. We argue that this conclusion is incorrect. The current goals of automation should be to:

1) Increase efficiency in repetitive, mundane tasks.
2) Reduce the possibility of errors.
3) Increase the amount of information available in order to make intelligent decisions.

Notice that these goals are achievable and do not aim to reduce labor. Rather, they move human labor to areas where the business will gain more value from that labor.
Toyota, the company mimicked all over the world for its Lean principles, has continued a strategy of putting humans first. In a continuous improvement culture, there must be humans on the production floor.
We want to place humans where their intellect can help improve the process. We also want to give them the relevant information to streamline and optimize their process. We believe that the ideal manufacturing environment will be a combination of machine, software, and human intelligence. Some examples will be discussed below.

Example 1: Error proofing
In 1711, Alexander Pope famously wrote “To err is human”. It is over 300 years later, and humans are still making mistakes. Manufacturing veterans understand this fact and know that humans are eventually prone to error. We choose to accept this truth, and work around our limitations. Rather than expecting our people to never make mistakes, we design the process so that our human errors are no longer possible.
This can be done in many ways. Depending on the problem, we can do this in the PLC, in the MES, or in a “middle-ware” solution that integrates the systems. Recipe management provides an easily illustrated example. Let’s say your process is making a cake. Are you adding the proper input materials? What is stopping your operators from adding chocolate instead of vanilla? Are the eggs expired?
Software can check against your inventory, quality and production systems and prevent mistakes before they occur. Eliminating these mistakes will make people’s jobs simpler and smoother.
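As a hypothetical sketch of such a check (the recipe, inventory records, and rules below are invented for illustration, not a real MES or inventory API):

```python
from datetime import date

# Hypothetical recipe for the cake example: required input materials and quantities.
RECIPE = {"flour": 500, "vanilla": 10, "eggs": 4}

def validate_batch(scanned_inputs, inventory):
    """Return a list of errors; an empty list means the batch may start."""
    errors = []
    for material, qty in scanned_inputs.items():
        if material not in RECIPE:
            errors.append(f"{material} is not part of this recipe")
            continue
        lot = inventory.get(material)
        if lot is None:
            errors.append(f"no inventory record for {material}")
        elif lot["expiry"] < date.today():
            errors.append(f"{material} lot {lot['lot_id']} is expired")
        if qty != RECIPE[material]:
            errors.append(f"{material}: expected {RECIPE[material]}, got {qty}")
    for material in RECIPE:
        if material not in scanned_inputs:
            errors.append(f"{material} was not added")
    return errors
```

An operator scanning chocolate instead of vanilla would be stopped before the batch starts, because the check reports both the unexpected material and the missing one.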

Example 2: Data Automation
Think of data automation as easing the flow of data throughout your systems. Wherever possible, use machine and process data rather than relying on humans to enter the correct data at the correct time.
One example is downtime data. When your process line is down, you want accurate information about why it is not making product. It is much better to rely on machine data for this, so that you can make well-founded decisions on how to improve your process. If you rely solely on human operators, it is likely that the largest category on your downtime chart will be "other" or "miscellaneous", which is of no use for improving your process.
To truly understand your process, you need high quality data — preferably directly from the source. Human memory is not accurate enough for this task. High quality data helps you to utilize your human assets to improve the process!
The people will still be needed, and still be valuable. Once you have automated the data reporting, you can task your skilled people with using the data to further enrich your factory.
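One way this can look in practice is mapping machine fault codes to downtime categories instead of relying on free-text operator entry. The codes, categories, and event format below are invented for illustration:

```python
# Illustrative sketch: derive downtime reasons from PLC fault codes rather
# than free-text operator entry. Codes and category names are made up.
FAULT_CODES = {
    101: "jam at infeed",
    102: "safety door open",
    205: "upstream starvation",
}

def downtime_reason(fault_code):
    """Map a fault code to a downtime category; unknown codes are flagged
    for follow-up instead of silently becoming 'other'."""
    return FAULT_CODES.get(fault_code, f"unmapped fault {fault_code} (review)")

def summarize(events):
    """Aggregate downtime minutes per reason from (fault_code, minutes) events."""
    totals = {}
    for code, minutes in events:
        reason = downtime_reason(code)
        totals[reason] = totals.get(reason, 0) + minutes
    return totals
```

Because every stoppage carries its machine-reported code, the resulting chart is dominated by actionable categories rather than "miscellaneous", and unmapped codes surface explicitly so the mapping can be improved over time.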

Example 3: Condition Monitoring
Despite advanced manufacturing techniques around the world, many facilities still run their equipment until failure. This disrupts the factory — it must now scramble to repair the equipment (hopefully there are spare parts on hand). The schedule must now be adjusted, and other production orders may be delayed, upsetting customers.
In an ideal manufacturing state, this would never happen. The maintenance team would understand when equipment performance started to shift downward and would work with the scheduling team to properly schedule the maintenance procedure.
The “P to F Interval Curve” helps to understand this concept. By responding at the first sign of equipment trouble, everyone’s job is made easier. However, this practice is difficult without using some automation. In this case, the “automation” is the flow of good data (or information about the condition of assets) through the factory.

Conclusion
Manufacturing personnel should be looking for ways that automation can enhance their jobs. By not embracing the wave of available technology, factories could be risking their business. Consider this fact: Research shows that since 2000, 52 percent of companies in the Fortune 500 have either gone bankrupt, been acquired, or ceased to exist as a result of digital disruption.
The best manufacturing strategy will be the one that effectively blends human inventiveness with machine and computer efficiency. Automation will be used alongside the people to help them in their jobs. The need for individuals in manufacturing will not go away — at least not for a very long time.

September 8, 2016 | Blog

Humans, leave the data processing to computers

Data Science

We are living in a world of rapidly developing computer ability. No doubt, this growth is going to change the role of humans in the workplace. While humans and some of our abilities will never be replaced by computers, it makes sense for businesses of all stripes to develop ways in which humans and technology can find a symbiotic relationship. We need to find a scenario in which what we do best as humans can interact and grow alongside what computers do best.

The best place to start is data processing and analysis. While humans are still an intrinsic part of increasing operational efficiency through data analysis, it's important to point out and accept that computers have the capability to process enormous amounts of data extremely quickly and find correlations and patterns. It's with the help of these patterns that we, as inspectors of the data computers present us, can make decisions that allow our companies to operate more efficiently.

As humans, we need to stick to what we do best. And there are many things that we are firmly better at than computers. Creativity, for one, is a skill that is extremely hard to program into a computer, regardless of how advanced or intelligent it is. Humans also excel at unstructured problem solving and, while it may seem obvious, we benefit from simply being human: our ability to empathize, relate to others, and express emotions. It's these things that, with the help of computers processing and analyzing data, can allow for better-informed choices, higher efficiency, and overall growth. But this is only possible if we evolve with computers rather than fight against them.

Before the advent of powerful technology, a lot of business decisions were made based on fairly qualitative and often unreliable information. Managers might use a gut feeling or a hunch to decide whether a machine needs a part changed, for example. Of course, there are stories of this working, but in the majority of cases, using huge amounts of data and computers to process it greatly reduces the unknown and allows managers to make better, more informed decisions. The speed and accuracy of computers reduce the risk of human error and the slowness to act or pivot in response to our quickly changing environment.

For a professional or business leader in 2016, what's paramount is developing a way for us humans to leverage our skills alongside the abilities of computers. It's that combination that will bring true innovation and put a company firmly in the 21st century. As we said above, deferring to computers for data processing and analysis is one way to do this, one that can help you make informed decisions regarding nearly every aspect of your business.

From monitoring and predicting when a machine asset might fail to refining your supply chain, letting computers process data and using the results will be an invaluable resource that will help you save money, improve efficiency, and remain competitive in the technology era. And what’s more, as computers get even more advanced and take over part of the decision making process, you will be a step ahead of the curve with the knowledge and understanding of how to use data analysis to improve your business. It will be deep-seated in you, your team, and your company and will enable an easy transition to new technologies that will streamline decision making, better inform your actions, and send you forth as a company of the future.

September 5, 2016 | Blog

Generative Models

Data Science

Modern industrial equipment is being outfitted with an ever larger number of sensors. This means that gathering performance measurements is easier than ever before and with this high frequency of data acquisition, we are dealing with problems of really, really big data. In theory, that should make the task of anomaly detection easier, but it doesn’t. Why might this be the case?

There are actually several reasons. First, and this is undoubtedly a good thing, most of the time equipment works. From a data science standpoint, this means the cases we are interested in predicting (those in which equipment fails) are rare. This creates an issue with casting the problem as prediction: the anomalous cases are severely underrepresented in the data, and with a single recorded instance of an incident, devising a sensible validation strategy for a model becomes extremely cumbersome. Second, mechanical equipment failures occur for a variety of reasons (machine characteristics, weather conditions, human error, just to name a few). This means we cannot treat all incidents as being similar, which further compounds the difficulty of applying a supervised learning apparatus.

In practical terms, there is also a third issue: labeled data is often difficult to obtain. The level of data maturity varies wildly among companies interested in predictive maintenance and clean, labeled incident data (where for each measurement point we know whether it is normal or abnormal) is difficult to come by.

Ultimately, we want to be able to detect anomalies in the data without explicitly defining what an anomaly is. So how can we go about this?

A rather direct way is to reverse the problem: start simply by learning what normal data looks like. This allows us to relax the limitations of a typical prediction problem and use weakly labeled data (a semi-supervised approach). We do not need a label for every single data point; we only need to identify a period when system behavior was deemed to be within acceptable bounds. This period is used to learn a normality criterion, which is then applied to the remaining part of the data, allowing us to discriminate the anomalous from the normal, generate warning signals by thresholding, and extract other types of useful information.


How do we summarise the information about the normal behaviour of a multivariate time series consisting of measurements that describe different aspects of the equipment of interest? From probability theory we know that if we can construct the joint cumulative distribution function of a multivariate series, we can extract all the necessary characteristics, in particular the probabilities associated with the likelihood of different patterns observed in the data. Such a distribution can be used to detect rare events: test instances falling within a low-density region can be considered outliers. This does not automatically mean they are indicative of failure (after all, even one-in-a-million events are supposed to happen every so often), but they certainly qualify for further inspection. In particular, the probabilities themselves can act as a warning signal: observed probabilities shrinking ahead of a critical event (less and less likely patterns manifesting) give an early indication that something is wrong.

Historically, a typical approach to this problem was to fit a parametric multivariate distribution (usually Gaussian) and use it to calculate the pattern probabilities. There are, however, issues with applying this technique at scale:

  • Empirical data is often asymmetric and possesses fat tails. Such characteristics cannot be captured by a Gaussian distribution. To a certain degree, this problem can be mitigated by using copulas (decoupling joint from marginal behaviour), but it still requires that parametric assumptions be made.
  • By construction, fitting a multivariate Gaussian distribution requires estimating the correlation matrix that parametrises the distribution. In the case of "wide" data (a large number of columns, which is frequently the case in a multi-sensor environment), numerical stability problems can arise because the estimated correlation matrix is ill-conditioned.
  • In addition, the multivariate Gaussian has the property of asymptotic independence in the tails; in plain English, extreme realizations occur independently. In mechanical systems, extreme realizations quite frequently happen together, but under the assumption of Gaussianity those phenomena are treated as practically independent. This can lead to an excessive proportion of false positives in the signals generated by the system.

Fortunately, we can still apply a generative approach (in the sense of focusing on distributional properties of the time series of interest) if we combine it with dimensionality reduction techniques. If our features of interest are continuous (or we can reasonably approximate them as such), it is not too much of a stretch to assume that the joint distribution belongs to an elliptical class and therefore a decomposition based on principal components analysis can be applied in a meaningful way.

An elegant example of this approach is the PCA scorer proposed by Shyu et al. (2003). To detect anomalous observations, we begin by estimating the principal components on a period considered normal, project the original variables onto the PC space, and then reconstruct the original variables (perform the inverse transformation). If the first few principal components (the ones that explain most of the variance in the data) suffice for a proper reconstruction of normal data, the associated reconstruction error will spike for anomalous examples, leading to a directly usable anomaly score that can be assigned to new, unseen observations.
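As a rough NumPy sketch of this kind of scorer (the synthetic two-sensor data, the random seed, and the choice to retain a single component are assumptions made for illustration; this is not the client data, nor the exact construction of Shyu et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" training period: two sensors that move together,
# plus a small amount of measurement noise.
t = rng.normal(size=(500, 1))
X_train = np.hstack([t, 2 * t]) + 0.05 * rng.normal(size=(500, 2))

# Learn the normality criterion: center the data and keep the leading
# principal component, estimated via SVD of the centered training matrix.
mean = X_train.mean(axis=0)
Xc = X_train - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:1].T  # (2, 1) basis of the retained component

def anomaly_score(x):
    """Reconstruction error after projecting onto the retained component."""
    xc = np.asarray(x, dtype=float) - mean
    recon = (xc @ V) @ V.T  # project to PC space and back
    return float(np.linalg.norm(xc - recon))

normal_point = [1.0, 2.0]      # follows the learned sensor correlation
anomalous_point = [1.0, -2.0]  # breaks the correlation between sensors
```

A point that respects the learned correlation structure reconstructs almost perfectly and gets a near-zero score, while a point that breaks it produces a large reconstruction error, exactly the spike usable as an anomaly signal.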

The graph below demonstrates the application of this method to a problem Semiotic Labs recently solved for a client: we were presented with data collected for two motors forming a single engine unit. The client knew that December 2015 was a period of normal operations in both motors and wanted our opinion on the motor performance from January 2016 onward. We trained a PCA scorer on the good period for motor 1 and evaluated the 2016 part of the data for both motors.

As the graph shows, the reconstruction error (our anomaly score) is consistently low for motor 2, but spikes in the first week of February 2016 for motor 1. Further examination of internal performance data on the client side (based on measurements performed periodically with more specialized equipment) confirmed our discovery: there were indeed mechanical issues with motor 1 that had not yet been fully acted upon.

August 31, 2016 | Blog

Eliminate Unplanned Downtime

Condition-based maintenance, Unplanned downtime

Condition based maintenance: using big data to lower costs, improve efficiency, and reduce maintenance.

Big data is staying true to its name – it’s doing big things. While great data-based strides have been made in medicine, entertainment, and education, we’re still seeing its potential begin to bloom as an industrial tool that can help companies make their operations efficient, systematic, and predictable.

Despite the fact that we’re only pioneers at the frontier of the industrial internet of things, the future of maintenance is clear – we are headed toward a world where the use of enormous amounts of data will allow companies to learn, months or years in advance, which specific machinery assets are in need of service and when they’re likely to fail. And, perhaps most importantly, this is a world in which a company can eliminate the costs incurred to business when a machinery asset unexpectedly fails.

Because we’re only at the onset of these exciting developments, it makes sense to put these technologies into context through an example. Imagine you’re a food producer, perhaps you raise cows for milk. One day the machinery you use to milk your cows breaks down. You call your engineer but she tells you that the machine needs a specific part that can only be brought in tomorrow. Suddenly, you can’t produce the milk that you are contractually required to provide to the grocery chain in town. In all likelihood, that grocery chain and you have a strict contract in which it is stipulated that if you don’t provide milk at a certain time and of a certain quantity, you pay a fine. All of this could be avoided if only you knew that your milking machinery would break down. Had you known, you could have scheduled the maintenance, brought in that part months ago, and completed the servicing so that it wouldn’t interfere with your operation.

This example is a simple one. For most companies, a breakdown somewhere along their supply chain can mean thousands of euros of costs per hour and unhappy partners and clients up and down the chain. Smart Condition Monitoring based on sensors and artificial intelligence eliminates that risk.  Through the collection and analysis of mass amounts of your data, an online monitoring system continually determines the health of an asset and predicts its remaining useful lifetime. It is a building block of Condition Based Maintenance (CBM), a regime that allows for scheduled maintenance so that no surprise breakdowns occur and no unnecessary costs are incurred.

Condition Based Maintenance, built on such data analysis, allows an asset owner to maintain machinery in a way that has only just become possible. When a company knows when its machinery will fail, and in what way, it enables a vast increase in overall efficiency. Gone is the risk that on any given day a motor might fail. While using CBM doesn't mean that your machinery will never fail, it does mean that you will know when an asset is likely to, and it gives you the luxury of planning, months in advance, when to service assets so that they never fail unexpectedly, improving your overall efficiency as a company.

It’s also much cheaper and more efficient to make small fixes to a machine in advance compared to fixing a big failure at the last minute. CBM enables asset owners to make regular maintenance of targeted problems based on real knowledge a reality. In other words, when you maintain your equipment, big expensive machinery failures are much less likely to occur. CBM allows asset owners to do just that. It predicts those small things that, if over time with diligent maintenance are serviced, hugely reduce the risk of major, expensive failures. CBM enables informed maintenance based on real data. This can mean a huge reduction in overall costs of machine upkeep and, of course, the elimination of failures.

While Smart Condition Monitoring and CBM are still a moving, growing area of technology, it is a strong, exciting path toward the future. As more companies collect bigger amounts of data, analyses will be even better at predicting how machinery will act, only improving efficiency and reducing costs more. Regardless of how this field grows, it’s undeniable that we are on the cusp of a fundamental change in industry, a development that will alter how companies use machinery and how machinery is maintained.