Most complex facilities such as refineries and chemical plants use some form of Linear Programming (LP) software to optimize and help plan feedstocks, unit conditions, and even future investments. These models generally maximize (or minimize) a linear objective function, usually profitability, subject to a set of linear constraints.
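As a toy illustration of the shape of these problems (not a real planning model; all crudes, yields, and margins below are invented), here is a minimal two-crude planning LP solved with `scipy.optimize.linprog`:

```python
# Toy planning LP (illustrative numbers only): choose daily rates for two
# hypothetical crudes, A and B, to maximize margin subject to a crude-unit
# capacity and a gasoline demand cap.
from scipy.optimize import linprog

c = [-4.0, -6.0]            # margins ($/bbl) for A, B; negated because linprog minimizes
A_ub = [[1.0, 1.0],         # A + B <= 100  (crude unit capacity, kbbl/d)
        [0.3, 0.5]]         # 0.3*A + 0.5*B <= 40  (gasoline yield vs. demand cap)
b_ub = [100.0, 40.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("rates:", res.x, "margin:", -res.fun)
```

Everything interesting about a real planning model lives in how many more rows and columns it has, and in how trustworthy those coefficients are.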

Specialized LP engineers are needed to maintain and update the linear (and in some cases non-linear) models that are used by these solvers. Typically, the end user is less experienced with LP models and is more engaged in analyzing model outputs and translating them into actionable plans. To deliver the most value to the organization, both must be aware of issues that reduce the fidelity of the model. I was technically one of the latter (a lowly LP user) but did dabble in the model and sought to understand how it worked, as I was often tasked with explaining in depth why a feedstock was or was not attractive to run.

As one of my former colleagues would say, “All models are wrong. Some models are useful.” I believe he was quoting someone else, but I digress…it’s a good quote either way to put things in perspective.

Planning models are all about how we use them and knowing their limitations and pitfalls. An LP will always tell you it has reached an optimal solution (if it converges), but in models with non-linear or recursive structure that may only be a local optimum. More insidiously (and probably more commonly), some other user has buried a one-off limit, such as a unit capacity, quality spec, or supply cap, deep in the case files or even integrated it into the model itself. These one-time overrides can quickly create a web of unintended limits that leads to strange “optimal” solutions. Recognizing when this has happened is difficult, and other than stumbling across the limits while investigating some strange behavior, the best prevention is keeping models and case files organized and reviewing them periodically.
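To make the buried-limit problem concrete with a toy example (all numbers invented, solved here with `scipy.optimize.linprog`), consider the same two-crude LP solved twice: once as intended, and once with a forgotten cap on crude B that someone left behind in an old case file. The solver cheerfully reports “optimal” both times:

```python
# Toy demo of a buried one-off limit (illustrative numbers only): a
# two-crude LP solved with and without a stale cap on crude B. Both runs
# terminate "optimal", but the stale cap silently moves the plan and
# destroys margin.
from scipy.optimize import linprog

c = [-4.0, -6.0]                      # margins for crudes A, B (negated to minimize)
A_ub = [[1.0, 1.0], [0.3, 0.5]]       # capacity and gasoline-demand rows
b_ub = [100.0, 40.0]

clean = linprog(c, A_ub=A_ub, b_ub=b_ub)          # the intended model
stale = linprog(c, A_ub=A_ub, b_ub=b_ub,
                bounds=[(0, None), (0, 20.0)])    # forgotten "B <= 20" override

print("clean:", clean.x, "margin:", -clean.fun)
print("stale:", stale.x, "margin:", -stale.fun)   # plan shifts, margin drops
```

Nothing in the solver output flags the difference; only comparing against an organized, reviewed base case reveals it.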

Another issue that arises is the regime the process unit models were tuned to: these are usually centered over some small operating range and linearized so that the economics in that regime can be modeled and optimized more easily. This all works fine unless the unit exhibits an optimum or severe non-linearity with respect to feed qualities and operating conditions. Notably, we see these in conversion units like Fluid Catalytic Crackers (FCCs), Hydrocrackers, and Reformers, and in chemicals units like steam cracking furnaces. There are well-documented methods and technologies to work around this and capture some (or all) of the non-linearity, but they usually come at a cost in model stability, accuracy, or runtime, which begins to degrade the usefulness of the model for its original purpose. Even where these non-linear approaches are implemented, the LP engineer must usually choose how to bias the model to better match plant data as model mismatch is identified (again, “All models are wrong.”). It’s best to be aware of the limits of the model and the types of problems it was designed to solve: for instance, which crude or feed to buy, and not necessarily exactly what conversion to run.
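A quick sketch of why linearization around a base point goes wrong, using entirely made-up coefficients: a hypothetical FCC gasoline-yield curve with an overcracking peak, versus its tangent linearization at a base point below the peak. The linear model keeps rewarding more conversion even after the true curve has turned over:

```python
# Hypothetical FCC gasoline yield (wt%) vs. % conversion, with an
# overcracking optimum at 75% conversion. All coefficients are invented
# for illustration.
def true_yield(conv):
    return 50.0 - 0.01 * (conv - 75.0) ** 2

BASE = 70.0
SLOPE = -0.02 * (BASE - 75.0)        # derivative of true_yield at BASE (= +0.1)

def linear_yield(conv):
    # Tangent-line approximation the LP would carry for this regime.
    return true_yield(BASE) + SLOPE * (conv - BASE)

for conv in (70.0, 75.0, 80.0):
    print(f"{conv:>4}% conv: true {true_yield(conv):.2f}, linear {linear_yield(conv):.2f}")
# The linear model says yield keeps rising past 75% conversion; the true
# curve peaks there, so an LP tuned at 70% will push conversion too hard.
```

Near the base point the two agree well, which is exactly why the mismatch is easy to miss until the optimizer drives the unit away from where the model was tuned.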

Yet another issue plaguing LP models is inaccurate quality data on feeds to the plant. Crude assays are notoriously wrong the minute they are completed (due to contamination in the supply chain and field decline). Naphtha and gas oil data are usually outdated, representing a few marker qualities that existed at some point in the past; for an accurate assessment, they need to be updated with the actual qualities provided by the trader. Even within a refinery model, naphtha qualities can be significantly off when it comes to reforming yields and benzene. This problem is harder to solve because it traces all the way back to the crude assay and how fractionation is represented in the model. Beware of feed qualities when getting granular with the model output.
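As a toy sensitivity check on this point (invented numbers again, via `scipy.optimize.linprog`): re-solving a small two-crude LP after correcting a single stale gasoline-yield coefficient can flip the optimal crude selection entirely, not just nudge the margin:

```python
# Toy assay-sensitivity demo (illustrative numbers only): the same
# two-crude LP solved with a stale assay yield for crude A versus the
# yield measured on the cargo actually delivered.
from scipy.optimize import linprog

def solve(gasoline_yield_a):
    c = [-4.0, -6.0]                              # margins for A, B (negated)
    A_ub = [[1.0, 1.0],                           # crude unit capacity
            [gasoline_yield_a, 0.5]]              # gasoline demand cap
    b_ub = [100.0, 40.0]
    return linprog(c, A_ub=A_ub, b_ub=b_ub)

stale  = solve(0.30)   # yield coefficient from the old assay
actual = solve(0.35)   # corrected yield from current feed data

print("stale assay plan:", stale.x)    # runs a 50/50 slate of A and B
print("actual plan:     ", actual.x)   # drops crude A from the slate entirely
```

A five-point error in one yield coefficient changes the answer to “which crude to buy,” which is precisely the question the planning model exists to answer.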

There is an inherent conflict, or at least overlap, between the LP space and the Real Time Optimization space, as the two may arrive at different solutions for the same time period. Offline kinetic models and LPs inherently cannot maintain accurate feed qualities around the clock in some operating environments, and may not even match the process well in certain operating regimes.

Today at Imubit, I lead our Economic Engineering Team, which analyzes optimization opportunities for our clients and how they are operating. We see our deep learning solution identify these different operating scenarios in real time, without feedstock information, leading to a better solution that can be implemented minute to minute. Seeing our technology work in this dynamic operating environment has really put into perspective how static and approximate an LP planning model is.

