Nastaran Okati, Max Planck Institute for Software Systems (correspondence to: nastaran@mpi-sws.org)
Sergio Hernan Garrido Mejia, Max Planck Institute for Intelligent Systems
William Roy Orchard, University of Cambridge
Patrick Blöbaum, Amazon Research
Dominik Janzing, Amazon Research
Abstract
Recent work conceptualized root cause analysis (RCA) of anomalies via quantitative contribution analysis using causal counterfactuals in structural causal models (SCMs). The framework comes with three practical challenges: (1) it requires the causal directed acyclic graph (DAG), together with an SCM; (2) it is statistically ill-posed since it probes regression models in regions of low probability density; (3) it relies on Shapley values, which are computationally expensive to evaluate.
In this paper, we propose simplified, efficient methods of root cause analysis for the case where the task is to identify a unique root cause rather than to perform a quantitative contribution analysis. Our proposed methods run in time linear in the number of SCM nodes, and they require only the causal DAG, without counterfactuals. Furthermore, for those use cases where the causal DAG is unknown, we justify the heuristic of identifying root causes as the variables with the highest anomaly scores.
1 Introduction
Detecting anomalies in a large number of variables has become a major effort in science and business applications, ranging from meteorology [1], monitoring medical health [2, 3], and monitoring of industrial fabrication [4], to fraud detection [5], credit scoring [6], cloud applications [7, 8, 9, 10, 11, 12], and more [13, 14, 15, 16, 17]. Oftentimes it is not only important to detect the anomalies but also to answer the question of why the anomaly was observed, or in other words, to find its root cause. This matters because it provides insight into how to mitigate the issue manifested in the anomaly, and is hence more actionable. A series of previous works use statistical correlations between features to explain the anomalous value in the target variable [18, 19, 20, 21, 22]. The major drawback of such approaches is that correlations between the anomalous variables and the target variable do not necessarily imply causation. Furthermore, such correlation-based approaches neglect the fact that very small perturbations in some variables can cause extreme anomalies in others. Hence, recent works conceptualized RCA using a causal framework [23, 10, 24, 25, 26]. From a causal perspective, finding causal relations between anomalies that are simultaneously or consecutively observed in different variables is an important step in identifying the root cause, i.e., answering the question of which anomaly caused the other.
Related work. We can categorize the related work into two groups: 1) those that require the causal DAG to find the root cause, and 2) those that do not require the causal DAG or try to infer it from data. Regarding the first line of work, our work builds on [23] in terms of the definition of root cause, where the authors use a framework based on causal graphical models and utilize Shapley values to quantify the contribution of each of the variables to the extremeness of the target variable. [27, 28, 9] use a traversal-based approach that identifies a node as a root cause under two conditions: 1) none of its parent nodes exhibits anomalous behavior, and 2) it is linked to the anomalous target node through a path of anomalous nodes. [29] apply such a traversal algorithm to data from cloud computing to find the root cause of performance drops in dependent services [12, 30, 31]. The main idea behind the second line of work is that if the DAG is unknown, the outliers themselves can support causal discovery. In a scenario where several data points are available from the anomalous state (which we do not assume here), finding the root cause can also be modeled as the problem of identifying the intervention target of a soft intervention [32]. Likewise, [33, 34] describe how to use heavy-tailed distributions to infer the causal DAG for linear models. In a sense, this also amounts to using outliers for causal discovery, if one counts points in the tail as outliers.
Our contributions. This paper tries to mitigate the practical challenges of RCA under the assumption that the task is restricted to identifying a unique root cause whose contribution dominates the contributions of the other nodes, rather than quantitative contribution analysis. To this end, we relax the assumptions on the available causal information step by step: In Section 3, we first explain that interventional probabilities are sufficient and we do not need to rely on structural-equation-based counterfactuals (rung 2 vs. rung 3 in the ladder of causation [35]). This finding enables us to design the Topologically Ordered Contribution Analysis (TOCA) algorithm. In Section 4 we discuss how to avoid the statistical problem of estimating interventional probabilities and propose SMOOTH TRAVERSAL, an algorithm that takes only the causal graph as input, and SCORE ORDERING, an algorithm that is based only on the ordering of the anomaly scores and hence does not even need the causal graph. All our methods avoid the combinatorial explosion of terms in Shapley value based RCA. Section 5 compares the approaches on simulated and real-world data.
2 Formal Framework for Root Cause Analysis
We first sketch the causal framework that our method and analysis are based on [36, 35]. Consider a system of random variables $X_1, \dots, X_n$ and an underlying DAG corresponding to the causal relations between them, in a way that a directed edge from $X_i$ to $X_j$ indicates that $X_i$ directly causes $X_j$. Furthermore, we are given an SCM with structural equations
$X_j = f_j(\mathrm{PA}_j, N_j), \qquad j = 1, \dots, n,$ (1)
which indicates that each variable $X_j$ is a function of its parents $\mathrm{PA}_j$ together with a noise variable $N_j$, where $N_1, \dots, N_n$ are jointly independent. We always assume $X_n$ to be the sink node and the target of interest, whose anomaly is supposed to be explained via root causes further upstream. An iterative use of Eq. 1 results in a representation
$X_n = F(N_1, \dots, N_n),$ (2)
in which the structural causal information is implicit in the function $F$.
Consider the case where the value $x_n$ of the target variable $X_n$ is flagged as an outlier, and we wish to find the unique root cause of this outlier among the variables $X_1, \dots, X_n$. By Eq. 2 we have that

$x_n = F(n_1, \dots, n_n),$

where $n_1, \dots, n_n$ denote the corresponding values of the noise variables. This representation allows us to consider the contribution of each variable to the outlier value of the target variable as the contribution of its noise term (throughout the paper, we assume the SCM is invertible, so that one can recover such noise values from the observed value of a variable together with that of its parents [37]) [23, 38] (note that in [38] no algorithm for root cause analysis is proposed). The idea of attributing anomalies to noise terms is justified by the perspective that each noise value switches between deterministic mechanisms, the 'response functions' [39], and the goal of RCA is to identify the 'corrupted mechanisms' which did not work as usual, i.e., the noise values that do not follow the SCM with a value 'in a normal range'. In other words, the recursive formulation allows us to attribute the extremeness in $x_n$ to a corrupted noise term at one of the upstream nodes. As a result, the framework is based on the intuition that if node $j$ is the unique root cause, then replacing $n_j$ with a normal value would change $x_n$ to a non-outlier, with high probability.
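For intuition, note that in an additive noise model $X_j = f_j(\mathrm{PA}_j) + N_j$ this inversion is trivial. A minimal sketch (with a hypothetical fitted regression function `f_hat`, not code from the paper):

```python
import numpy as np

def recover_noise(x_j, parent_values, f_hat):
    """Recover the noise value n_j = x_j - f_j(pa_j) in an additive noise model."""
    return x_j - f_hat(parent_values)

# Toy linear mechanism X2 = 2*X1 + N2, observed x1 = 3.0 and x2 = 7.5:
f_hat = lambda pa: 2.0 * pa[0]
print(recover_noise(7.5, np.array([3.0]), f_hat))  # 1.5
```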
2.1 Anomaly Score
We next define an outlier score which we use in the next sections for our method and analysis. Let $\tau: \mathbb{R} \to \mathbb{R}$ be an appropriate feature map. This feature map can be any existing outlier score function mapping elements of $\mathbb{R}$ to real values, for example the z-score (the z-score of a sample $x$ is $(x - \mu)/\sigma$, where $\mu$ is the population mean and $\sigma$ is the population standard deviation). Further, define the event
$E := \{ \tau(X_n) \geq \tau(x_n) \}$ (3)
of $X_n$ being more extreme than the observed value $x_n$ according to the feature function $\tau$. From the point of view of statistical testing, $\tau$ can be seen as a test statistic for the null hypothesis that $x_n$ is drawn from $P(X_n)$, and $P(E)$ is the corresponding p-value. Note that a small $P(E)$ indicates that the null hypothesis is rejected with high confidence (this interpretation is certainly invalid in a standard anomaly detection scenario where Eq. 3 is repeatedly computed for every observation in a sequence; however, so-called anytime p-values [40] are outside of our scope), and we rather assume that the usual mechanism generating samples from $P(X_n)$ has been corrupted for that specific statistical unit. Since small p-values correspond to strong anomalies, we define the anomaly score by
$S(x_n) := -\log P(E).$ (4)
Since the logarithm of the probability of an event measures its 'surprise' in information theory [13, 15], as in [23] we call anomaly scores with the above calibration 'IT scores', for information-theoretic. Given $m$ observations $x_n^{(1)}, \dots, x_n^{(m)}$ from $P(X_n)$, we will often use the following simple estimator:
$\hat{S}(x_n) := -\log \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\{\tau(x_n^{(i)}) \geq \tau(x_n)\}.$ (5)
It is important to note that this value can be at most $\log m$. As a result, for an estimate $\hat{S}(x_n) = s$, at least $e^{s}$ data samples must have been used.
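As a minimal sketch (our own illustration, assuming the z-score as feature map $\tau$), the estimator in Eq. 5 can be implemented as follows:

```python
import numpy as np

def it_score(x_obs, samples):
    """Estimate the IT anomaly score of Eq. 5, with the z-score as feature map."""
    mu, sigma = samples.mean(), samples.std()
    tau = lambda x: np.abs(x - mu) / sigma        # feature map: z-score
    p_hat = np.mean(tau(samples) >= tau(x_obs))   # empirical estimate of P(E)
    m = len(samples)
    # The estimate is capped at log(m): an all-zero indicator sum is clipped to 1/m.
    return -np.log(max(p_hat, 1.0 / m))

rng = np.random.default_rng(0)
print(it_score(4.0, rng.normal(size=1000)))       # strong outlier: score near log(1000)
```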
2.2 Contribution Analysis
To compute the contribution of each noise variable to the anomaly score of the outlier $x_n$, [23] measure how replacing the observed noise value $n_j$ (originating from a potentially corrupted mechanism) with a normal value changes the likelihood of the outlier event. Intuitively, this shows us the extent to which $n_j$ has been responsible for the extremeness of $x_n$. This change of probability, however, should be measured given different contexts, with each context being the set of variables that are already changed to a normal value. To formalize this, for any subset $T$ of the index set $\{1, \dots, n\}$, we first define the probability of the outlier event when all nodes in $T$ are randomized according to their joint noise distribution, and only those in $\bar{T} := \{1, \dots, n\} \setminus T$ are kept fixed at their observed noise values, i.e.,
$P_T(E) := P\big\{\tau\big(F(N_T, n_{\bar{T}})\big) \geq \tau(x_n)\big\}, \qquad N_T \sim P(N_T).$ (6)
Now the contribution of a node $j \notin T$ given the context $T$ is defined as:
$C(j \mid T) := \log P_T(E) - \log P_{T \cup \{j\}}(E).$ (7)
Let $\Pi$ be the set of all possible permutations of the nodes and $\pi \in \Pi$ be any permutation. One then defines the contribution of a node $j$ given the permutation $\pi$ as
$C(j \mid \pi) := C\big(j \mid I_\pi^{\succ j}\big),$ (8)
where $I_\pi^{\succ j}$ denotes the set of nodes that appear after $j$ with respect to $\pi$, i.e., $I_\pi^{\succ j} := \{i : \pi(i) > \pi(j)\}$. One can easily see that for each permutation $\pi$, the anomaly score decomposes into the contributions of each node:

$S(x_n) = \sum_{j=1}^{n} C(j \mid \pi).$
To symmetrize over all the possible permutations in $\Pi$, the Shapley contribution is calculated as:
$C^{\mathrm{Sh}}(j) := \frac{1}{n!} \sum_{\pi \in \Pi} C(j \mid \pi),$ (9)
which is certainly expensive to compute [23]. Here, however, motivated by the fact that in most real-world scenarios there is a single corrupted factor causing anomalies, we try to find the unique node with a corrupted mechanism, i.e., the unique root cause of the outlier $x_n$. As our first result, we show that in this scenario, it is unlikely that nodes other than the root cause get a significant value for their contribution. Note that this directly allows us to identify the root cause without the need to quantitatively find the contribution of each and every node to the outlier value of the target variable, which saves us exponentially many computations (the sampling-based sketch below illustrates the burden). In addition to the computational load, the symmetrization suffers from a more fundamental issue, namely that for most permutations $\pi$, the contributions rely on structural equations (rung 3 causal information according to the ladder of causation in [35]), while some rely only on the graphical model (rung 2), which we explain in the next sub-section.
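For reference, the following hedged sketch of the permutation-sampling approximation of Eq. 9 illustrates why Shapley-based RCA is costly; `outlier_prob` is a hypothetical routine that estimates $P_T(E)$ by Monte Carlo sampling of the noises in $T$, so every single call is itself expensive:

```python
import math
import random

def shapley_contributions(nodes, outlier_prob, num_perms=100):
    """Monte-Carlo approximation of the Shapley contributions in Eq. 9.

    outlier_prob(T) is assumed to estimate P_T(E), the outlier probability
    when the noise values of the nodes in T are randomized.
    """
    contrib = {j: 0.0 for j in nodes}
    for _ in range(num_perms):
        pi = random.sample(nodes, len(nodes))    # a random permutation
        randomized = set()
        log_p = 0.0                              # log P_{}(E) = log 1
        for j in reversed(pi):                   # walk pi from its last node
            randomized.add(j)
            new_log_p = math.log(outlier_prob(frozenset(randomized)))
            contrib[j] += (log_p - new_log_p) / num_perms  # C(j | context), Eq. 7
            log_p = new_log_p
    return contrib
```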
2.3 Interventional vs Counterfactual RCA
Since the SCM cannot be inferred from observational and interventional data, relying on the SCM is a serious bottleneck for RCA. Fortunately, not every term in Eq. 9 requires the SCM. We explain this through an example for the bivariate causal relation $X \to Y$: given the SCM $Y = f(X, N_Y)$ and $X = N_X$, randomizing $N_Y$ and fixing $x$ generates $Y$ according to the observational conditional $P(Y \mid X = x)$ (which no longer relies on the SCM), while randomizing $N_X$ and keeping $n_Y$ fixed cannot be resolved into any observational term (see also Section 5 in [41]). (This can be checked by an SCM with two binaries where $Y = X \oplus N_Y$ with unbiased $N_Y$, which induces the same distribution as the SCM $Y = N_Y$, where $X$ and $Y$ are disconnected and thus $X$ does not get any contribution.) The following result generalizes this insight:
Proposition 2.1
Whenever $\pi$ is a topological order of the causal DAG, i.e., there are no arrows $X_j \to X_i$ for $\pi(j) > \pi(i)$, all contributions $C(j \mid \pi)$ can be computed from observational conditional distributions.
All proofs can be found in Appendix A. While [23] introduced the Shapley contribution in Eq. 9 for a quantitative contribution analysis, we are interested in finding the unique root cause whenever it exists. In other words, we are not interested in a 'fair' quantitative attribution analysis, since we assume that one node is dominating anyway, regardless of which order we choose.
While we can never exclude the case where several mechanisms are corrupted, we always start with the working hypothesis that there has been only one. If this does not work, we can still try out hypotheses with more than one, but with the strong inductive bias of preferring explanations where most mechanisms worked as expected, in agreement with the so-called 'sparse mechanism shift hypothesis' in [42, 43].
3 Identifying the unique root cause

To formalize the assumption of a unique root cause, we first generalize the notion of the contribution of a node to the contribution of a set of nodes $S$ given a context $T$, i.e.,
$C(S \mid T) := \log P_T(E) - \log P_{T \cup S}(E).$ (10)
This notion becomes rather intuitive after observing that it is given by the sum of the contributions of all the elements in $S$ when they are added one by one to the context $T$:
Lemma 3.1
For any set $S = \{s_1, \dots, s_k\}$ it holds that $C(S \mid T) = \sum_{i=1}^{k} C\big(s_i \mid T \cup S_{i-1}\big)$ with $S_{i-1} := \{s_1, \dots, s_{i-1}\}$ and $S_0 := \emptyset$.
Next, the following result shows that it is unlikely to obtain high contributions when the noise values are randomly drawn from their usual distributions:
Proposition 3.2
For any sets $S, T$ of nodes and any $c \geq 0$, it holds that $P\{C(S \mid T) \geq c\} \leq e^{-c}$, where the probability is taken over the observed noise values drawn from their joint distribution $P(N_1, \dots, N_n)$.
The way Proposition 3.2 is phrased, it assumes that all noise variables follow their usual distributions $P(N_1, \dots, N_n)$. This is actually not the case we are interested in. When we observe an anomaly, we assume that at least one mechanism is corrupted for the observation of interest, and thus at least one noise value has been drawn from a different distribution. However, one can easily see that the proposition still holds when all noise values not in $S$ are drawn from a different distribution. We therefore read Proposition 3.2 as saying that it is unlikely that non-corrupted nodes have high contributions.
To find the root cause, we will describe an algorithm that replaces step by step each observed noise value with random values and estimates how this affects the probability of the event $E$ in Eq. 3, which we call the outlier event for short (this is actually oversimplified language because a value is certainly still an outlier when its feature value is slightly below $\tau(x_n)$). We observe that for any set $T$, the probability of obtaining an outlier when all noise values in $T$ are randomized can be rewritten in terms of contributions of sets:
Lemma 3.3 (outlier probabilities from contributions of sets)
The probability of the outlier event when all noise variables in $T$ are randomized can be written in terms of contributions of $T$ or $\bar{T}$, i.e.,

$-\log P_T(E) = C(T \mid \emptyset) = S(x_n) - C(\bar{T} \mid T).$ (11)
We now consider contributions with respect to a given permutation $\pi$ of the index set $\{1, \dots, n\}$. To this end, remember from the previous section that $I_\pi^{\succ j}$ denotes the set of nodes that appear after $j$ with respect to the permutation $\pi$, i.e., $I_\pi^{\succ j} := \{i : \pi(i) > \pi(j)\}$, and likewise for the superscripts $\succeq$, $\prec$, and $\preceq$. We first define the dominating root cause with respect to a permutation $\pi$.
Definition 3.4 (dominating root cause with respect to $\pi$)

Node $j^*$ is said to be the dominating root cause with respect to the permutation $\pi$ if we have, for all $j$ with $\pi(j) > \pi(j^*)$,

$C\big(I_\pi^{\succeq j} \mid \emptyset\big) \leq \epsilon, \qquad \text{(i)}$

and further, for all $j$ with $\pi(j) \leq \pi(j^*)$, we have

$C\big(I_\pi^{\prec j} \mid I_\pi^{\succeq j}\big) \leq \epsilon. \qquad \text{(ii)}$
The definition is motivated by the assumption that sets that do not contain $j^*$ have no significant contribution. However, demanding that this holds for all subsets would be far too strong, since Proposition 3.2 cannot hold uniformly over all of them. Therefore we only demand it for the two dedicated families of subsets ($I_\pi^{\succeq j}$ and $I_\pi^{\prec j}$) which are sufficient for our analysis.
First we observe that for all $j$ with $\pi(j) > \pi(j^*)$, together with Lemma 3.3, condition (i) of Def. 3.4 implies

$-\log P_{I_\pi^{\succeq j}}(E) \leq \epsilon.$ (12)
Further, for all $j$ with $\pi(j) \leq \pi(j^*)$, Lemma 3.3 and condition (ii) of Def. 3.4 imply

$-\log P_{I_\pi^{\succeq j}}(E) \geq S(x_n) - \epsilon.$ (13)
In other words, if we increase the set of nodes for which the actual noise values are replaced with random values node by node, starting from the last one in the ordering $\pi$, then $-\log P(E)$ jumps by at least $S(x_n) - 2\epsilon$ when we reach the root cause $j^*$. On the other hand, jumps of log probabilities are smaller than $2\epsilon$ at every other node. To tell which of the bounds, Eq. 13 or Eq. 12, is valid, we need to estimate the probabilities up to an error of

$\delta := \frac{S(x_n) - 4\epsilon}{4} > 0.$ (14)
The inequality assumes $S(x_n) > 4\epsilon$; otherwise, $x_n$ should not be considered an outlier. Next we investigate how many samples we need to estimate the probabilities up to the error level $\delta$:
Lemma 3.5 (Hoeffding bound)
To have all estimates satisfy $|\hat{P}_T(E) - P_T(E)| \leq \delta$ simultaneously with probability at least $1 - \alpha$, we need to use at least $m \geq \log(2n/\alpha)/(2\delta^2)$ data samples.
As an example, one can plug the desired error level $\delta$ and the number of nodes $n$ into this bound to obtain the required number of data samples (see the sketch below). If $S(x_n)$ has been estimated via Eq. 5, it is based on at least $e^{S(x_n)}$ samples.
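As a minimal illustration (the concrete values here are our own choice, not taken from the paper), the sample size prescribed by such a Hoeffding-plus-union-bound argument can be computed as:

```python
import math

def hoeffding_sample_size(delta, n_nodes, alpha=0.05):
    """Samples per probability estimate so that all n_nodes estimates are within
    delta of the truth simultaneously, with probability at least 1 - alpha."""
    return math.ceil(math.log(2 * n_nodes / alpha) / (2 * delta ** 2))

# Illustrative values: error level 0.1, 50 nodes, 95% confidence.
print(hoeffding_sample_size(delta=0.1, n_nodes=50))  # 381
```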
For practical applications, we propose Algorithm 1 (TOCA, Topologically Ordered Contribution Analysis), which simply infers as the root cause the node for which the estimated log probabilities show the largest jump. In what comes next, we discuss different properties of TOCA.
Conceptual difference to the traversal algorithm: When we have identified a node $j$ for which the log probabilities of $E$ jump when including $n_j$ in the conditioning set, we can be certain that $j$ is a root cause in the sense of having high contribution. This is not the case for the traversal algorithm (as used by [27, 28, 9]), which infers the most upstream anomaly as root cause. Let, for instance,

$X = N_X, \qquad Y = X + N_Y,$

where $N_X$ and $N_Y$ are standard Gaussians. When $n_X = 3$ and $n_Y = -6$, traversal identifies $X$ as the root cause (because both values $x = 3$ and $y = -3$ are anomalous), although a more typical value of $X$ would have made the target outlier even stronger. Hence, $X$ even has a negative contribution in the above sense.
Why do we measure gaps of log probabilities? From the perspective of statistical estimation, it would certainly be easier to seek the index with the largest increase of the probability of $E$ instead of the largest increase of the logarithmic probability of $E$. Why don't we work with a definition of contribution that is defined by additive increase rather than multiplicative increase? The following example shows that a high multiplicative increase can only be caused by rare events, while a high additive increase can happen quite frequently: Consider the boolean causal relation $X_3 = X_1 \wedge X_2$ between 3 binaries with $p(X_1 = 1) = 1/2$ while $p(X_2 = 1) = \epsilon \ll 1$. Then we have $P(X_3 = 1 \mid X_1 = 1) = \epsilon$, while $P(X_3 = 1) = \epsilon/2$. Hence, $X_1 = 1$ increases the probability of $X_3 = 1$ by $\epsilon/2$, although this value occurs quite frequently. In contrast, a large multiplicative increase (here from $\epsilon/2$ to $1/2$) is only achieved by the rare event $X_2 = 1$ (in agreement with Proposition 3.2). The conclusion that a large contribution comes from a corrupted mechanism ('the root cause') is only justified because high contributions are unlikely otherwise.
Why do we choose a topological order? So far, we have not made use of the fact that $\pi$ is a topological ordering of the causal DAG. We now employ Proposition 2.1 and note that we can replace all our probabilities with observational probabilities:
$P_{I_\pi^{\succeq j}}(E) = P\big(E \mid x_{I_\pi^{\prec j}}\big).$ (15)
In practice, the easiest way to sample from the conditional distribution would still be to just sample the noise values. This may come as a surprise, given that we have advertised not relying on the structural equations as an advantage of the topological ordering. However, the crucial point is that we have nevertheless circumvented the problem of SCMs being ambiguous: we can choose any SCM that generates the observed conditional distribution, regardless of whether it properly describes the counterfactuals. This is because the obtained contributions only depend on the observational conditional probabilities, due to Eq. 15.
Algorithm 1 TOCA (Topologically Ordered Contribution Analysis)
1: Input: outlier $x_n$, the FCM between the variables, a topological ordering $\pi$ of the nodes, sample size $m$
2: Initialize: $\hat{p}_{n+1} := 1$
3: for $j = n$ down to $1$ do
4:   $T := I_\pi^{\succeq j}$, $c := 0$
5:   for $k = 1, \dots, m$ do
6:     draw $\tilde{n}_T \sim P(N_T)$ and compute $\tilde{x}_n := F(n_{\bar{T}}, \tilde{n}_T)$
7:     if $\tau(\tilde{x}_n) \geq \tau(x_n)$ then $c := c + 1$
8:   end for
9:   $\hat{p}_j := c/m$
10: end for
11: return $\arg\max_j \big( \log \hat{p}_{j+1} - \log \hat{p}_j \big)$
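The following is a minimal Python sketch of TOCA (the function names and the simulator interface are our own assumptions, not the DoWhy implementation):

```python
import numpy as np

def toca(noise_obs, simulate, noise_sampler, tau, tau_obs, m=10_000):
    """Sketch of TOCA: return the node whose randomization causes the largest
    jump in the estimated log probability of the outlier event.

    noise_obs           -- observed noise vector (nodes in topological order)
    simulate(N)         -- hypothetical FCM forward pass: noise matrix -> target values
    noise_sampler(j, m) -- m fresh draws from node j's noise distribution
    tau, tau_obs        -- feature map and the observed value tau(x_n)
    """
    n = len(noise_obs)
    log_p = np.zeros(n + 1)                  # log_p[n] = log P_{}(E) = 0
    noises = np.tile(noise_obs, (m, 1))      # m copies of the observed noises
    for j in range(n - 1, -1, -1):           # reverse topological order
        noises[:, j] = noise_sampler(j, m)   # cumulatively randomize nodes j..n-1
        p_hat = max(np.mean(tau(simulate(noises)) >= tau_obs), 1.0 / m)
        log_p[j] = np.log(p_hat)
    jumps = log_p[1:] - log_p[:-1]           # jump when node j joins the randomized set
    return int(np.argmax(jumps))
```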
4 Causal relations between outliers via calibrated anomaly scores
We have discussed methods that do not require SCMs and require only the graphical model, that is, the causal DAG endowed with observational conditional probabilities $P(X_j \mid \mathrm{PA}_j)$. However, inferring the latter is statistically ill-posed, particularly when the number of parents is large. While the additive noise assumption reduces the problem essentially to the regression problem of estimating conditional expectations, the problem remains ill-posed because analyzing outliers amounts to probing the regression models in regions with low density. Here we describe approaches that infer causal relations between anomalies from a given causal DAG together with marginal anomaly scores, using the simple estimator in Eq. 5. To this end, we start with the toy example of cause-effect pairs and then generalize it to more than two variables.
4.1 Outliers of cause-effect pairs
Let the causal DAG be $X \to Y$ for two variables, where we observe anomalies $x, y$ with anomaly scores $S(x), S(y)$ that occur together. We argue now that the anomaly $x$ is implausible to be the unique root cause of the anomaly $y$ if $S(x)$ is significantly smaller than $S(y)$. This can be proven subject to the following assumptions:
1. Monotonicity: increasing the anomaly score of $x$ does not decrease the probability of an anomaly event at $Y$:

$S(x) \geq S(x') \;\Longrightarrow\; P\{S(Y) \geq c \mid x\} \geq P\{S(Y) \geq c \mid x'\} \quad \text{for all } c.$ (16)
2. Injectivity: the mapping $x \mapsto S(x)$ is one-to-one. This is the case, for instance, for scores obtained from one-sided tail probabilities, that is, for $S(x) = -\log P\{X \geq x\}$.
We then define two different null hypotheses stating that the mechanisms $P(X)$ or $P(Y \mid X)$, respectively, worked as usual: $H_X$: $x$ has been drawn from $P(X)$, and $H_{Y|X}$: $y$ has been drawn from $P(Y \mid x)$ (where $P(X, Y)$ denotes the joint distribution of $X, Y$ in the normal mode). Note that $H_{Y|X}$ allows that $x$ is drawn from an arbitrary distribution instead of $P(X)$; only the mechanism generating $Y$ from $X$ has remained the same. We then have the following criteria for rejecting $H_X$ and $H_{Y|X}$:
Lemma 4.1 (rejecting $H_X$)

$H_X$ can be rejected at level $e^{-S(x)}$.
This follows immediately from the definition of IT scores in Eq. 4. On the other hand, we obtain:
Lemma 4.2 (rejecting $H_{Y|X}$)

Subject to the Monotonicity and Injectivity assumptions, $H_{Y|X}$ can be rejected at level $e^{-(S(y) - S(x))}$.
Lemma 4.1 and Lemma 4.2 entail interesting consequences for the case where the causal direction is not known: Let $S(y) \geq 2 S(x)$. For the direction $Y \to X$ we can only reject $H_Y$, while the mechanism $P(X \mid Y)$ possibly worked as expected. However, for $X \to Y$, we would reject both hypotheses, that $P(X)$ and that $P(Y \mid X)$ worked as expected, with p-level $e^{-S(x)}$ each. Following the working hypothesis that at most one mechanism was corrupted, we thus prefer $Y \to X$.
Small sample version. When the anomaly scores are estimated from a small number of observations according to Eq. 5, a comparison of scores can still be insightful: if $\hat{S}(x) < \hat{S}(y)$, then there is a large fraction of observations at least as extreme as $x$ for which the corresponding anomaly at $Y$ does not occur. Hence, the event $\{\tau(X) \geq \tau(x)\}$ is not a sufficient cause [44] for the event $\{\tau(Y) \geq \tau(y)\}$. In this sense, we can still exclude the value with the smaller score as a candidate for the root cause, if we are interested in sufficient causes.
4.2 Outliers in DAGs with multiple nodes
To infer which node in a DAG with $n$ nodes has been corrupted, we start with the hypothesis that all nodes except $j$ worked as expected: $H_j$: for the anomaly event, all $x_i$ with $i \neq j$ have been drawn from $P(X_i \mid \mathrm{pa}_i)$. To define a test statistic, we first introduce the conditional outlier scores from [45]:

$S(x_i \mid \mathrm{pa}_i) := -\log P\{\tau_i(X_i) \geq \tau_i(x_i) \mid \mathrm{pa}_i\},$
where $\tau_i$ is some feature function as before (we actually allow variable-dependent feature functions $\tau_i$, but they need to coincide between conditional and unconditional outlier scores). With variable input $X_i$, these define random variables $S(X_i \mid \mathrm{PA}_i)$. Further, we add the following assumption to Monotonicity and Injectivity from Subsection 4.1:
3. Continuity: all variables $X_i$ are continuous with a density w.r.t. the Lebesgue measure, and all conditional distributions $P(X_i \mid \mathrm{pa}_i)$ also have such a density.
This condition ensures that all the conditional outlier scores are distributed according to the density $e^{-s}$ on $[0, \infty)$, because then the cumulative distribution function is uniformly distributed. It entails a property that will be convenient for testing $H_j$:
Lemma 4.3 (independence of conditional outlier scores)
Subject to the Continuity assumption, the conditional outlier scores $S(X_1 \mid \mathrm{PA}_1), \dots, S(X_n \mid \mathrm{PA}_n)$ are independent random variables.
Independence of conditional scores enables the definition of a simple test statistic that is obtained by summing over the individual ones (note that this resembles known methods of aggregating p-values, but on a logarithmic scale [46]), together with a correction term. With the cumulative score $\bar{S} := \sum_{i \neq j} S(x_i \mid \mathrm{pa}_i)$, we have that:
$S_j := \bar{S} - \log \sum_{k=0}^{n-2} \frac{\bar{S}^k}{k!}.$ (17)
To understand Eq. 17, note that the second term is needed to 'recalibrate', since the sum of independent IT scores is no longer an IT score; with the extra term, it is. Intuitively, one may think of the extra term as a multi-hypothesis testing correction: the sum of independent IT scores is likely to be large because it is not unlikely that the set contains one large term. The following result states that this is quantitatively the right correction:
Lemma 4.4 (distribution of cumulated score)
If $H_j$ holds, the statistic in Eq. 17 is distributed according to the density $e^{-s}$.
As a direct result of the above lemmas we have:
Theorem 4.5 (p-value for $H_j$ from conditional outlier scores)

$H_j$ can be rejected for the observation $x_1, \dots, x_n$ with p-value $e^{-S_j}$, with $S_j$ as in Eq. 17.
The theorem justifies choosing the index with the maximal conditional outlier score as the root cause, whenever we follow the working hypothesis that only one mechanism is corrupted.
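A quick simulation (our own sanity check, not from the paper) illustrates the recalibration behind Lemma 4.4: the corrected sum of independent exponential IT scores is again standard exponential:

```python
import math
import numpy as np
from scipy.stats import expon, kstest

rng = np.random.default_rng(0)
n_scores, n_trials = 5, 100_000

# The sum of n independent IT scores (each ~ Exp(1)) is Erlang distributed ...
s_bar = rng.exponential(size=(n_trials, n_scores)).sum(axis=1)

# ... but subtracting the log of the Erlang tail sum recalibrates it (cf. Eq. 17).
ks = np.arange(n_scores)
tail_sum = np.sum(s_bar[:, None] ** ks / [math.factorial(k) for k in ks], axis=1)
calibrated = s_bar - np.log(tail_sum)

print(kstest(calibrated, expon.cdf))  # large p-value: consistent with Exp(1)
```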
We started this subsection with the goal of avoiding the estimation of conditional probabilities. Therefore, we will now replace conditional outlier scores with bounds derived from marginal anomaly scores, following the ideas from the bivariate case. To this end, we replace the Monotonicity assumption from there with two stronger ones. We introduce the 'outlier events' $E_i := \{\tau_i(X_i) \geq \tau_i(x_i)\}$ and postulate:
1a. Monotonicity: the probability of the outlier event $E_i$ does not decrease when the parents of $X_i$ are set, by intervention, to values with higher anomaly scores.
1b. Non-negative dependence: For any node $i$:

$P\Big(E_i \,\Big|\, \bigcap_{k \in \mathrm{PA}_i} E_k\Big) \geq P(E_i),$
where $\mathrm{PA}_i$ denotes the parents of $X_i$. Both conditions are close in spirit because they assume mutual positive influence of outlier events. While 1a is explicitly causal, 1b is actually a purely statistical condition, but it is also justified by an implicit causal model in which outliers at one point of the system render outliers at other points more likely, and not less. We then obtain the following bound for rejecting $H_j$:
Theorem 4.6 (p-value from increments of scores)

With the cumulative anomaly score increments $\bar{S}_j := \sum_{i \neq j} \big( S(x_i) - \sum_{k \in \mathrm{PA}_i} S(x_k) \big)$, it holds that if conditions 1a and 1b hold, $H_j$ can be rejected with p-value $e^{-\bar{S}_j} \sum_{k=0}^{n-2} \bar{S}_j^k / k!$.
4.3 Finding root causes by minimizing score differences
Theorem 4.6 has interesting consequences for the scenario where the variables form the causal chain
$X_1 \to X_2 \to \dots \to X_n.$ (18)
In this case, $\bar{S}_j$ reduces to the sum of score differences of adjacent nodes. Accordingly, $H_j$ can then be rejected at p-value:
$e^{-\bar{S}_j} \sum_{k=0}^{n-2} \frac{\bar{S}_j^k}{k!}, \qquad \text{with } \bar{S}_j = \sum_{i \neq j} \big( S(x_i) - S(x_{i-1}) \big).$ (19)
We conclude that $H_j$ needs to be rejected whenever the anomaly score increases significantly along the chain of anomalies at any node other than $j$. This justifies inferring as the unique root cause the index that maximizes the score difference to its predecessor (with the difference for the first node taken with respect to 'an imaginary non-anomalous node' of score zero), because this yields the weakest bound in Eq. 19. Motivated by these bounds, we propose the algorithm SMOOTH TRAVERSAL, which selects as the root cause the node that shows the strongest increase in anomaly score compared to its most anomalous parent. In contrast to the usual Traversal [27, 28, 9], this avoids choosing an arbitrary threshold above which a node is considered anomalous.
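A minimal sketch of SMOOTH TRAVERSAL (our own illustration; the data structures are assumptions):

```python
def smooth_traversal(scores, parents):
    """Pick the node with the strongest increase in anomaly score over its most
    anomalous parent (root nodes are compared against a score of 0).

    scores  -- dict: node -> estimated IT anomaly score S(x_i)
    parents -- dict: node -> list of parent nodes in the causal DAG
    """
    def increase(node):
        strongest_parent = max((scores[p] for p in parents[node]), default=0.0)
        return scores[node] - strongest_parent
    return max(scores, key=increase)

# Toy chain X1 -> X2 -> X3 where the score jumps at X2:
scores = {"X1": 0.5, "X2": 6.0, "X3": 6.5}
parents = {"X1": [], "X2": ["X1"], "X3": ["X2"]}
print(smooth_traversal(scores, parents))  # X2
```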
4.4 Shortlist of root causes via ordering anomaly scores
We now drop the final assumption and assume that the causal DAG is not known and we are only given the marginal anomaly scores of each node. How can we find the root cause? We now argue that the top-scored anomalies provide a shortlist of root causes. The idea is, again, that the anomaly $x_i$ is unlikely to cause a downstream anomaly $x_j$ with $S(x_j) > S(x_i)$, unless we allow additional mechanisms (other than the one at $x_i$) to be corrupted. To show this, we restrict attention to a DAG with three nodes $X, Y, Z$, in which we have merged all paths from $X$ to $Y$ into the arrow $X \to Y$ and all nodes upstream of them into a single huge confounder $Z$. In analogy to the proof of Theorem 4.6, and with the same assumptions, we can bound the conditional outlier score by the difference of the score to the sum of the parent scores: $S(y \mid x, z) \geq S(y) - S(x) - S(z)$. Assuming $S(z) \approx 0$, the hypothesis that the mechanism $P(Y \mid X, Z)$ worked as expected can be rejected at a p-level of about $e^{-(S(y) - S(x))}$. This motivates the algorithm SCORE ORDERING, which chooses the top-scored anomalies as candidates for root causes. We do not provide pseudocode due to its simplicity.
5 Experiments
We provide a comparison of the performance of the different approaches on simulated and real-world data. We compare our algorithms TOCA, SMOOTH TRAVERSAL, and SCORE ORDERING to 'Traversal' [9, 27, 28], 'Counterfactual' [23], and Circa [25]. We provide a short description of each of the algorithms below:
• TOCA (ours, see Algorithm 1) traverses the reverse topological ordering of the nodes and identifies as the root cause the node with the highest jump in estimated log probabilities.
• SMOOTH TRAVERSAL (ours, see Section 4.3) identifies as the root cause the node that shows the strongest increase of anomaly score compared to its most anomalous parent.
• SCORE ORDERING (ours, see Section 4.4) takes the node with the highest anomaly score as the root cause. It is the only approach that requires neither the causal graph nor the SCM and uses only the estimated anomaly scores.
• 'Traversal' [9, 27, 28] identifies a node as a root cause under two conditions: 1) none of its parents exhibits anomalous behavior, and 2) it is linked to the anomalous target node through a path of anomalous nodes.
• Circa [25] fits a linear Gaussian SCM to the data in the non-anomalous regime and compares the predicted value of each variable given its parents (hence, it uses the causal graph) to the observed value in the anomalous regime. The node whose value differs the most is identified as the root cause.
• Counterfactual [23] finds the Shapley contribution of each node to the outlierness of the target through full counterfactual analysis and outputs the node with the highest Shapley value as the root cause.
We assume that only a single data point is available from the anomalous regime. We therefore do not compare to approaches that involve learning the causal graph (e.g., Root Cause Discovery [31]), nor do we compare to approaches such as ε-Diagnosis [11], which perform two-sample tests to identify which nodes have changed significantly from the non-anomalous to the anomalous regime.
We generate random SCMs with 50 nodes with linear and non-linear structural equations (more details on the generation process are given in Appendix B) and sample 1000 observations of the SCM in the non-anomalous regime to train the IT anomaly scorer (as in Eq. 5). To produce anomalous data, we first choose a root cause at random from the list of all nodes and a target node from the set of its descendants (including itself). Second, we sample the noise of all variables, modify the noise of the root cause by adding $k$ standard deviations, and propagate the noises through the SCM to obtain a realization of each variable. We repeat this process 100 times for each value of $k$ and consider an algorithm successful if its top-ranking result coincides with the chosen root cause.
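The injection step can be sketched as follows (names and interfaces are our own, not from the paper's code; the full generator is described in Appendix B):

```python
def inject_anomaly(sample_noises, noise_stds, propagate, root_cause, k):
    """One anomalous observation: shift the root cause's noise by k standard
    deviations of its own noise distribution, then push all noises through the SCM.

    sample_noises() -- hypothetical sampler returning one normal-regime noise vector
    noise_stds      -- per-node noise standard deviations
    propagate       -- hypothetical SCM forward pass: noise vector -> variable values
    """
    noises = sample_noises()
    noises[root_cause] += k * noise_stds[root_cause]
    return propagate(noises)
```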
Through our experiments, we aim to answer the following questions:
How do the different algorithms compare, and how does their performance depend on the strength of the anomaly at the root cause? In Fig. 1 we look at the performance of the different algorithms against the strength (the number of standard deviations by which the noise term deviates from its mean) of the anomaly at the root cause. We find that the strongest-performing algorithms are SMOOTH TRAVERSAL, Traversal, and Counterfactual, all of which outperform TOCA and SCORE ORDERING, which have comparable performance to each other. Circa performs considerably worse than the other approaches, which we suspect is due to its assumption of linearity. The performance of TOCA and SCORE ORDERING improves as the strength of the anomaly increases. This is expected, given that both algorithms aim to find the unique anomalous node.
How do the approaches scale with increasing graph size (both in performance and run time)? For a fixed amount of injected perturbation at the root cause (3 standard deviations away from its mean), we look at the runtimes of the different algorithms for an SCM with 100 nodes, generated similarly as described above. As shown in Fig. 2 in Appendix B, Traversal and SMOOTH TRAVERSAL are the fastest, with the remaining approaches having comparable average run times; however, Counterfactual has a long tail, with times for 50-node graphs running into the tens of seconds.
How is performance affected by increasing causal graph size? We generate SCMs with varying numbers of nodes, with a fixed amount of injected perturbation at the root cause (3 standard deviations away from its mean). In Fig. 3 in Appendix B we see that the performance of SCORE ORDERING and TOCA decreases slightly with the size of the SCM.
When comparing the performance, we should however keep in mind that Counterfactual requires the SCM, while Traversal and SMOOTH TRAVERSAL require only the causal graph; this is a clear disadvantage of Counterfactual against the other two algorithms. In addition, SMOOTH TRAVERSAL does not have a free parameter (a threshold on scores), which Traversal does. Both SCORE ORDERING and TOCA have similar performance, but SCORE ORDERING does not even require knowledge of the causal graph, whereas TOCA does. Nevertheless, TOCA has the advantage over all others (except Counterfactual) that the inferred gap of log probabilities witnesses a true causal contribution, which is not the case for the other methods, since upstream anomalies may not be causally responsible for the target anomaly (as argued in Section 3).
We have also tried our algorithms on the 'PetShop dataset' [29], where the task is to infer the root cause of performance drops in a cloud-based application. This presents a challenging task due to high missingness, low sample sizes, and near-constant variables. Furthermore, the causal ground truth is only partially known, if one accepts the inverted call graph (showing which services call which in the application) as a proxy for the causal graph. The results are shown in Appendix B.3: SMOOTH TRAVERSAL and SCORE ORDERING perform well, while TOCA fails, probably because it relies heavily not only on the causal graph but also on the SCM.
6 Conclusions
Without challenging the approach of [23] as a clean definition of a root cause and its quantitative contribution, we have explored several directions in which the practical limitations of the corresponding algorithm provided in DoWhy [47] can be mitigated without sacrificing too much of its theoretical justification: first, by avoiding rung 3 causal models and high computational load; second, by avoiding the estimation of conditional probabilities; and third, by not even relying on the causal DAG.
References
[1] S. Wibisono, M. T. Anwar, Aji Supriyanto, and I. H. A. Amin. Multivariate weather anomaly detection using DBSCAN clustering algorithm. In Journal of Physics: Conference Series, volume 1869, page 012077. IOP Publishing, 2021.
[2] Eric V. Strobl and Thomas A. Lasko. Identifying patient-specific root causes with the heteroscedastic noise model. Journal of Computational Science, 72:102099, 2023.
[3] Eric V. Strobl. Counterfactual formulation of patient-specific root causes of disease. Journal of Biomedical Informatics, page 104585, 2024.
[4] Gian Antonio Susto, Matteo Terzi, and Alessandro Beghi. Anomaly detection approaches for semiconductor manufacturing. Procedia Manufacturing, 11:2018–2024, 2017.
[5] Jellis Vanhoeyveld, David Martens, and Bruno Peeters. Value-added tax fraud detection with scalable anomaly detection techniques. Applied Soft Computing, 86:105895, 2020.
[6] Sanjiv Das, Richard Stanton, and Nancy Wallace. Algorithmic fairness. Annual Review of Financial Economics, 15:565–593, 2023.
[8] Cheng-Ming Lin, Ching Chang, Wei-Yao Wang, Kuang-Da Wang, and Wen-Chih Peng. Root cause analysis in microservice using neural Granger causal discovery. arXiv preprint arXiv:2402.01140, 2024.
[9] Dewei Liu, Chuan He, Xin Peng, Fan Lin, Chenxi Zhang, Shengfang Gong, Ziang Li, Jiayu Ou, and Zheshun Wu. MicroHECL: High-efficient root cause localization in large-scale microservice systems. In Proceedings of the 43rd International Conference on Software Engineering: Software Engineering in Practice, ICSE-SEIP '21, pages 338–347. IEEE Press, 2021.
[10] Azam Ikram, Sarthak Chakraborty, Subrata Mitra, Shiv Saini, Saurabh Bagchi, and Murat Kocaoglu. Root cause analysis of failures in microservices through causal discovery. Advances in Neural Information Processing Systems, 35:31158–31170, 2022.
[11] Huasong Shan, Yuan Chen, Haifeng Liu, Yunpeng Zhang, Xiao Xiao, Xiaofeng He, Min Li, and Wei Ding. ε-Diagnosis: Unsupervised and real-time diagnosis of small-window long-tail latency in large-scale microservice platforms. In The World Wide Web Conference, pages 3215–3222, 2019.
[12] Meng Ma, Jingmin Xu, Yuan Wang, Pengfei Chen, Zonghua Zhang, and Ping Wang. AutoMAP: Diagnose your microservice-based web applications automatically. In Proceedings of The Web Conference 2020, pages 246–258, 2020.
[13] Douglas M. Hawkins. Identification of outliers, volume 11. Springer, 1980.
[14] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):1–58, 2009.
[15] Charu C. Aggarwal. An introduction to outlier analysis. Springer, 2017.
[16] Ane Blázquez-García, Angel Conde, Usue Mori, and Jose A. Lozano. A review on outlier/anomaly detection in time series data. ACM Computing Surveys (CSUR), 54(3):1–33, 2021.
[17] Leman Akoglu. Anomaly mining: Past, present and future. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 1–2, 2021.
[18] Edwin M. Knorr and Raymond T. Ng. Finding intensional knowledge of distance-based outliers. In VLDB, volume 99, pages 211–222, 1999.
[19] Barbora Micenková, Raymond T. Ng, Xuan-Hong Dang, and Ira Assent. Explaining outliers by subspace separability. In 2013 IEEE 13th International Conference on Data Mining, pages 518–527. IEEE, 2013.
[21] Meghanath Macha and Leman Akoglu. Explaining anomalies in groups with characterizing subspace rules. Data Mining and Knowledge Discovery, 32:1444–1480, 2018.
[22] Nikhil Gupta, Dhivya Eswaran, Neil Shah, Leman Akoglu, and Christos Faloutsos. Beyond outlier detection: Lookout for pictorial explanation. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part I 18, pages 122–138. Springer, 2019.
[23] Kailash Budhathoki, Lenon Minorics, Patrick Bloebaum, and Dominik Janzing. Causal structure-based root cause analysis of outliers. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2357–2369. PMLR, 17–23 Jul 2022.
[24] Dongjie Wang, Zhengzhang Chen, Jingchao Ni, Liang Tong, Zheng Wang, Yanjie Fu, and Haifeng Chen. Interdependent causal networks for root cause localization. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5051–5060, 2023.
[25] Mingjie Li, Zeyan Li, Kanglin Yin, Xiaohui Nie, Wenchi Zhang, Kaixin Sui, and Dan Pei. Causal inference-based root cause analysis for online service systems with intervention recognition. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3230–3240, 2022.
[26] Dongjie Wang, Zhengzhang Chen, Yanjie Fu, Yanchi Liu, and Haifeng Chen. Incremental causal graph learning for online root cause analysis. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2269–2278, 2023.
[27] Pengfei Chen, Yong Qi, Pengfei Zheng, and Di Hou. CauseInfer: Automatic and distributed performance diagnosis with hierarchical causality graph in large distributed systems. In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, pages 1887–1895, 2014.
[28] JinJin Lin, Pengfei Chen, and Zibin Zheng. Microscope: Pinpoint performance issues with causal graphs in micro-service environments. In International Conference on Service Oriented Computing, 2018.
[29] Michaela Hardt, William Orchard, Patrick Blöbaum, Shiva Kasiviswanathan, and Elke Kirschbaum. The PetShop dataset – finding causes of performance issues across microservices, 2023.
[30] Yu Gan, Mingyu Liang, Sundar Dev, David Lo, and Christina Delimitrou. Sage: Practical and scalable ML-driven performance debugging in microservices. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '21, pages 135–151, New York, NY, USA, 2021. Association for Computing Machinery.
[32] Amin Jaber, Murat Kocaoglu, Karthikeyan Shanmugam, and Elias Bareinboim. Causal discovery from soft interventions with unknown targets: Characterization and learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 9551–9561. Curran Associates, Inc., 2020.
[33] Nicola Gnecco, Nicolai Meinshausen, Jonas Peters, and Sebastian Engelke. Causal discovery in heavy-tailed models. The Annals of Statistics, 49(3):1755–1778, 2021.
[34] Carlos Améndola, Benjamin Hollering, Seth Sullivant, and Ngoc Tran. Markov equivalence of max-linear Bayesian networks. In Cassio de Campos and Marloes H. Maathuis, editors, Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, volume 161 of Proceedings of Machine Learning Research, pages 1746–1755. PMLR, 27–30 Jul 2021.
[35] J. Pearl and J. Mackenzie. The book of why. Basic Books, USA, 2018.
[36] J. Pearl. Causality. Cambridge University Press, 2000.
[37] Kun Zhang, Zhikun Wang, Jiji Zhang, and Bernhard Schölkopf. On estimation of functional causal models: General results and application to the post-nonlinear causal model. ACM Transactions on Intelligent Systems and Technology (TIST), 7(2):1–22, 2015.
[38] Julius von Kügelgen, Abdirisak Mohamed, and Sander Beckers. Backtracking counterfactuals. In Mihaela van der Schaar, Cheng Zhang, and Dominik Janzing, editors, Proceedings of the Second Conference on Causal Learning and Reasoning, volume 213 of Proceedings of Machine Learning Research, pages 177–196. PMLR, 11–14 Apr 2023.
[39] A. Balke and J. Pearl. Counterfactual probabilities: Computational methods, bounds, and applications. In R. Lopez de Mantaras and D. Poole, editors, Uncertainty in Artificial Intelligence, volume 10. Morgan Kaufmann, San Mateo, 1994.
[40] Akash Maharaj, Ritwik Sinha, David Arbour, Ian Waudby-Smith, Simon Z. Liu, Moumita Sinha, Raghavendra Addanki, Aaditya Ramdas, Manas Garg, and Viswanathan Swaminathan. Anytime-valid confidence sequences in an enterprise A/B testing platform. In Companion Proceedings of the ACM Web Conference 2023, WWW '23 Companion, pages 396–400, New York, NY, USA, 2023. Association for Computing Machinery.
[41] Dominik Janzing, Patrick Blöbaum, Atalanti A. Mastakouri, Philipp M. Faller, Lenon Minorics, and Kailash Budhathoki. Quantifying intrinsic causal contributions via structure preserving interventions. In International Conference on Artificial Intelligence and Statistics, pages 2188–2196. PMLR, 2024.
[42] Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Rosemary Ke, Sébastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. A meta-transfer objective for learning to disentangle causal mechanisms. arXiv preprint arXiv:1901.10912, 2019.
[43] B. Schölkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y. Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612–634, 2021.
[44] Judea Pearl. Sufficient causes: On oxygen, matches, and fires. Journal of Causal Inference, 7(2), 2019.
[45] Dominik Janzing, Kailash Budhathoki, Lenon Minorics, and Patrick Blöbaum. Causal structure based root cause analysis of outliers. arXiv:1912.02724, 2019.
[46] R. A. Fisher. Statistical Methods for Research Workers. Oliver and Boyd, 1925.
[47] Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, and Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models, 2022.
Appendix A Proofs
A.1 Proof of Proposition 2.1
With the event $E$ as in Eq. 3 and the contribution $C(j \mid \pi)$ as in Eq. 8, we have
(20)
Steps (i) and (ii) are seen as follows:
(21)
The first equality in Eq. 21 follows because the target is a function of the conditioning variables and the randomized noises. The second one follows because conditioning on all ancestors blocks all backdoor paths. Note that since $\pi$ is a topological ordering of the nodes, all nodes kept fixed are ancestors of the randomized ones.
A.2 Proof of Lemma 3.1
A.3 Proof of Proposition 3.2
By definition we have that
and we use instead of when it is clear from the context.
Further, is actually a function of and .For fixed , define the set .It can equivalently be described by
We thus have
Hence,
Using
we obtain
A.4 Proof of Lemma 3.3
where (i) follows from the definition in Eq. 10 and (ii) follows from the fact that $P_{\emptyset}(E) = 1$, as long as all probabilities involved are positive.
A.5 Proof of Lemma 3.5
The proof simply follows by using Hoeffding's bound together with the union bound: if $m$ data points are used for the estimation of each $P_T(E)$, we have by Hoeffding's bound for each $T$ that $P\{|\hat{P}_T(E) - P_T(E)| \geq \delta\} \leq 2 e^{-2m\delta^2}$. From the union bound, the error bound holds simultaneously for all $n$ estimates with probability at least $1 - 2n e^{-2m\delta^2}$. To have this probability be at least $1 - \alpha$, we should have $m \geq \log(2n/\alpha)/(2\delta^2)$.
A.6 Proof of Lemma 4.2
The first equality uses Injectivity, and the second inequality Monotonicity (see Subsection 4.1). The third one follows from $P(A \mid B) \leq P(A)/P(B)$ for any two events $A, B$.
A.7 Proof of Lemma 4.3
Assume without loss of generality that $X_1, \dots, X_n$ is a topological ordering of the DAG. Then $S(X_i \mid \mathrm{PA}_i)$ is independent of the preceding variables due to the local Markov condition. Since $P(X_i \mid \mathrm{pa}_i)$ has a density for all $\mathrm{pa}_i$, the score $S(X_i \mid \mathrm{pa}_i)$ also has a density for all $\mathrm{pa}_i$. Hence, $S(X_i \mid \mathrm{PA}_i)$ is independent of $S(X_1 \mid \mathrm{PA}_1), \dots, S(X_{i-1} \mid \mathrm{PA}_{i-1})$.
A.8 Proof of Lemma 4.4
The proof follows from a minor modification of the proof of Lemma 1 in [45], replacing $n$ with $n - 1$.
A.9 Proof of Theorem 4.5
The theorem is a direct result of Lemma 4.3 and Lemma 4.4.
A.10 Proof of Theorem 4.6
Recalling that anomaly scores are non-negative, we thus obtain:
with the notation introduced above. Hence, we obtain a lower bound for the cumulative score:
(22)
Appendix B Experimental details and further experiments
B.1 Experimental details
To generate an SCM for the experiments in Section 5 (see Fig. 1), we first uniformly sample between 10 and 20 root nodes (20% to 40% of the total nodes of the graph) and uniformly assign to each either a standard Gaussian, a uniform, or a mixture of Gaussians as its noise distribution. As a second step, we recursively sample non-root nodes. Non-root nodes need not be sink nodes. The number of parent nodes that each non-root node is conditioned on is randomly chosen following a distribution that assigns lower probability to higher numbers of parents. In total, the causal graph is composed of 50 nodes. The parametric forms of the structural equations are randomly assigned to be either a simple feed-forward neural network, with a probability of 0.8 (to account for non-linear models), or a linear model. The feed-forward neural network has three layers (input layer, hidden layer, and output layer), where the hidden layer has a number of nodes chosen randomly between 2 and 100. All the parameters of the neural network are sampled from a uniform distribution between -5 and 5. For the linear model, we sample the coefficients from a uniform distribution between -1 and 1 and set the intercept to 0. In both cases, the noise enters additively (additive Gaussian noise).
To generate data for the non-anomalous regime, we sample the noise of each of the variables and propagate the noise forward using the previously sampled structural equations. As mentioned in the main text, to produce anomalous data we choose a root cause at random from the list of all nodes and a target node from the set of its descendants (including itself). Then we sample the noise of all variables, modify the noise of the root cause by adding $k$ standard deviations, and propagate the noises through the SCM to obtain a realization of each variable. We repeat this process 100 times for each value added to the standard deviation and consider an algorithm successful if its result coincides with the chosen root cause.
B.2 Further experiments by varying the number of nodes in the graph
We also run experiments fixing the number of standard deviations added ($k$ in Section 5) to 3 and varying the number of nodes in the SCM. We see in Fig. 3 that the performance of Traversal, SMOOTH TRAVERSAL, and Counterfactual does not change much across graph sizes, whereas the quality of TOCA and SCORE ORDERING decreases slightly for larger graphs. Circa, on the other hand, performs worse for an intermediate number of nodes, but its quality increases as the graph gets larger.
Fig. 4 shows the runtime of all the algorithms for an SCM with 100 variables (and added standard deviation 3). The only qualitative change between this figure and Fig. 2 is that SCORE ORDERING is slightly faster than TOCA.
B.3 PetShop dataset
[29] introduce a dataset specifically designed for evaluating root cause analysis in microservice-based applications. The dataset includes 68 injected performance issues, which increase latency and reduce availability throughout the system. In addition to the approaches evaluated by the authors, reproduced below, we evaluate our algorithms on both top-1 recall (Table 1) and top-3 recall (Table 2).