1 Introduction
The problem of estimating a directed acyclic graph (DAG) from high-dimensional observational data has attracted substantial recent attention in the statistics and machine learning literature, owing to its importance in a number of application areas, including molecular biology. In the latter area, high-throughput techniques have enabled biomedical researchers to profile biomolecular data in order to better understand the causal molecular mechanisms involved in gene regulation and protein signaling (emmert2014gene). This, in turn, has provided the impetus for the development of numerous approaches for tackling the problem; see, for example, the review paper marbach2012wisdom and references therein.

This is a challenging learning problem in its general form: to reconstruct a DAG from data, one has to consider all possible orderings of the nodes and score the resulting network structures based on evidence gleaned from the available data, and the computational complexity of obtaining all possible orderings of a set of nodes in a directed graph is exponential in the size of the graph. In certain applications, one may have access to a complete topological ordering, which renders the problem computationally tractable, as discussed below. An interesting question then arises: what advantages does the availability of external information on partial orderings of the nodes bring to solving the problem? This is the key issue addressed in this study, motivated by an application to dairy operations, where such information can be reliably obtained from operators, as explained next.
Dairy cattle operations are characterized by complex interactions between several factors which determine the success of these systems. Most of these operations result in the collection of large amounts of data that are usually analyzed using univariate statistical models for certain variables of interest; as a consequence, information about the relationships between these variables is ignored. In addition, given the structure of a dairy cattle agroecosystem, it is of great interest to carry out data analyses that permit a systemic approach (Jalvingh:1992; Thornley:France:2007). Moreover, because of their high relevance when making management decisions and recommendations, knowing not only interaction patterns but also causal relationships between the components of dairy production systems is a problem of current interest, with the potential to have a marked impact on the dairy industry. In any of these systems (such as the data analyzed in Section 4), causal relationships between selected pairs of variables are reliably known. The statistical task, then, is to leverage these known relationships and the observed data to estimate the underlying network of relationships between the variables under consideration.
In particular, suppose $X_1, X_2, \dots, X_n$ are i.i.d. random vectors from a multivariate distribution with mean $\mu$ and covariance matrix $\Sigma$. In sample-deprived settings, an effective and popular approach for estimating $\Sigma$ imposes sparsity on the entries of $\Sigma$ itself (covariance graph models), on the entries of $\Omega = \Sigma^{-1}$ (graphical models), or on appropriate Cholesky factors of $\Omega$ (directed acyclic graph models, or Bayesian networks). The choice of an appropriate model often depends on the application.
In this study, our focus is on learning a DAG from high-dimensional data, assuming sparsity in an appropriate Cholesky factor of $\Omega$. In particular, consider the factorization $\Omega = B^T B$ of the inverse covariance matrix, where $B$ can be converted to a lower triangular matrix with positive diagonal entries through a permutation of its rows and columns. The sparsity pattern of $B$ gives us a directed acyclic graph (DAG) $\mathcal{D} = (V, E)$, where $V = \{1, 2, \dots, p\}$ and $E = \{(i, j) : i \neq j, B_{ji} \neq 0\}$. The assumption $B_{ji} = 0$ corresponds to an appropriate conditional independence under Gaussianity, and to an appropriate conditional correlation being zero in the general setting.

There are two main lines of work that have dealt with DAG estimation in the Gaussian framework: one where the permutation that makes $B$ lower triangular is known, and one where it is unknown. We briefly discuss them below. As previously mentioned, in many applications a natural ordering of the variables, such as a time-based or location-based ordering, presents itself, and hence a natural choice for the permutation which makes $B$ lower triangular is available. Penalized likelihood methods, which use versions of the $\ell_1$ penalty and minimize the respective objective functions over the space of lower triangular matrices, have been developed in HLPL:2006; Shajoie:Michalidis:2010; KORR:2017. Bayesian methods for this setting have been developed in CKG:2017; Altamore:Consonni:2013; Consonni:2017. For many of these methods, high-dimensional consistency results for the model parameters have also been established.
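As a concrete illustration of this factorization, consider a small chain DAG. The sketch below (our own toy example and notation, not taken from the paper) builds the lower triangular matrix implied by a simple structural equation model and recovers the edge set from its sparsity pattern:

```python
import numpy as np

# Hedged toy example: for the chain DAG X1 -> X2 -> X3 with structural
# equations X1 = e1, X2 = 0.8*X1 + e2, X3 = 0.5*X2 + e3, Var(e_j) = 1,
# the matrix B (row j holds X_j minus its parent contributions) is lower
# triangular, and Omega = B^T B is the implied inverse covariance matrix.
B = np.array([[ 1.0,  0.0, 0.0],
              [-0.8,  1.0, 0.0],
              [ 0.0, -0.5, 1.0]])
Omega = B.T @ B
Sigma = np.linalg.inv(Omega)  # implied covariance of (X1, X2, X3)

# Off-diagonal support of B encodes the DAG: B[j, i] != 0  iff  i -> j.
edges = [(i, j) for j in range(3) for i in range(3)
         if i != j and abs(B[j, i]) > 1e-12]
print(edges)  # [(0, 1), (1, 2)] : edges X1 -> X2 and X2 -> X3
```

Permuting the variable labels permutes the rows and columns of `B`, which is why the DAG remains encoded even when `B` is only lower triangular up to a permutation.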
If the permutation/ordering that makes $B$ lower triangular is unknown, then the problem becomes significantly more challenging, both computationally and theoretically. In this setting, several score-based, constraint-based and hybrid algorithms for estimating the underlying CPDAG (the permutation is not identifiable from observational data, but one can recover an equivalence class of DAGs, referred to as a completed partially directed acyclic graph, or CPDAG) have been developed and studied in the literature (PCR:2001; GH:2013; LB:1994; H:1995; C:2003; EW:2008; Z:2011; KB:2007; T:2006; G:2011; G:2012; vandeGeer:Buhlmann:2013). See Aragam:Zhou:2015 for an excellent and detailed review. Recently, Aragam:Zhou:2015 developed a penalized likelihood approach called CCDr for sparse estimation of $B$, which has been shown to be significantly more computationally scalable than previous approaches.
However, in some applications, such as the dairy cattle data studied in Section 4, domain-specific information regarding the variables is available which allows for a partition of the variables into sets $A_1, A_2, \dots, A_K$ such that any possible edge between a vertex in $A_k$ and a vertex in $A_{k'}$ is directed from $A_k$ to $A_{k'}$ whenever $k < k'$. However, the ordering of the variables within the same subset is not known, and has to be inferred from the data. This setting is not subsumed in the approaches mentioned above, and falls somewhere between the two extremes of having complete information regarding the ordering and having no information at all. The goal of this paper is to develop a hybrid approach for DAG estimation which leverages this partition-based partial ordering information. We will also show that using the partition information reduces the number of computations and, more importantly, allows for parallel processing, unlike CCDr. This can lead to significant improvements in computational speed and statistical performance.
The remainder of the paper is organized as follows. In Section 2, we develop and describe in detail our hybrid algorithm, called PartitionDAG. In Section 3, we perform a detailed experimental study to evaluate and understand the performance of the proposed algorithm. In Sections 3.1, 3.2 and 3.3, we use known DAGs from the DREAM3 competition (dream1; dream2; dream3) and perform an extensive simulation study to explore the ability of PartitionDAG to incorporate the ordering information, and how this ability changes with more or less informative partitions. In Section 3.4, we perform a similar simulation study, this time using randomly generated DAGs with a larger number of variables. Finally, in Section 4, we analyze dairy cattle data recently gathered by the Universidad Nacional de Colombia using the proposed PartitionDAG approach.
2 DAG estimation using partition information and the corresponding PartitionDAG algorithm
In order to understand how one can leverage the partial ordering information, it is crucial to understand the workings, similarities, and differences of the Concave penalized Coordinate Descent with reparameterization (CCDr) Aragam:Zhou:2015 and the Convex Sparse Cholesky Selection (CSCS) KORR:2017 algorithms, which are state of the art in terms of computational scalability and convergence for the boundary settings with completely unknown and completely known variable orderings, respectively.
The CSCS algorithm is derived under the setting where a domain-specific ordering of the variables which makes $B$ lower triangular is known. Hence, the DAG estimation problem boils down to estimating the sparsity pattern in $L$, the lower triangular permuted version of $B$. In other words, $L$ is a lower triangular matrix with positive diagonal entries such that $\Omega = L^T L$. The objective function for CSCS is

$$Q_{\mathrm{CSCS}}(L) = \operatorname{tr}(L S L^T) - 2 \sum_{j=1}^{p} \log L_{jj} + \lambda \sum_{1 \le j < i \le p} |L_{ij}|,$$

where $S$ is the sample covariance matrix. The first two terms in $Q_{\mathrm{CSCS}}$ correspond to the Gaussian log-likelihood, and the third term is an $\ell_1$ penalty which induces sparsity in the lower triangular matrix $L$.
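To make the structure of this objective concrete, the sketch below evaluates a function of this form numerically. It is our own reconstruction from the description of the terms (a trace term and a log-determinant term from the Gaussian log-likelihood, plus an $\ell_1$ penalty on the strictly lower triangle), not the authors' implementation:

```python
import numpy as np

def q_cscs(L, S, lam):
    """Objective of the CSCS form (our numerical sketch):
    tr(L S L^T) - 2 * sum(log diag(L)) + lam * sum_{i > j} |L_ij|."""
    p = L.shape[0]
    loglik = np.trace(L @ S @ L.T) - 2.0 * np.sum(np.log(np.diag(L)))
    penalty = lam * np.sum(np.abs(L[np.tril_indices(p, k=-1)]))
    return loglik + penalty

# Toy usage (hypothetical 2x2 example): at L = I and S = I the
# likelihood part is tr(I) = 2 and the penalty vanishes.
print(q_cscs(np.eye(2), np.eye(2), lam=0.5))  # 2.0
```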
The CCDr algorithm is derived under the setting where there is no knowledge about the permutation that makes $B$ lower triangular. Here $\Omega = B^T B$, where $B$ varies over the space of matrices that can be reduced to a lower triangular matrix with positive diagonal entries by permuting rows and columns. More formally, denoting by $S_p$ the group of permutations of $\{1, 2, \dots, p\}$, we assume that $B \in \mathcal{B}_p$, where

$$\mathcal{B}_p = \left\{ B : P_\sigma B P_\sigma^T \text{ is lower triangular with positive diagonal entries for some } \sigma \in S_p \right\},$$

and $P_\sigma$ denotes the permutation matrix corresponding to $\sigma$.
The CCDr method uses the following objective function:

$$Q_{\mathrm{CCDr}}(B) = \operatorname{tr}(B S B^T) - 2 \sum_{j=1}^{p} \log B_{jj} + \lambda \sum_{i \neq j} |B_{ij}|.$$

Exactly as in CSCS, the first two terms correspond to the Gaussian log-likelihood, while the third term imposes sparsity on $B$.
While the objective functions for CSCS and CCDr look identical, the algorithms that minimize them are very different, because the minimization is carried out over different spaces. Both objective functions are convex, but CSCS has the added advantage that the range for $L$, namely the set of lower triangular matrices with positive diagonal entries, is convex as well. This leads to a convex problem (though not a strictly convex one for $n < p$), and one can establish that the sequence of iterates converges to a global minimum of the objective function. However, the range for $B$, namely the set of matrices that can be converted to a lower triangular matrix with positive diagonal entries through a permutation of the rows and columns, is not convex, and general results in the literature (at best) only guarantee convergence of the sequence of iterates to a local minimum of the objective function. In addition, while the CSCS minimization can be broken down into parallelizable subproblems, the same cannot be said for CCDr, which puts CCDr at a significant computational disadvantage. Finally, asymptotic consistency for the general setting (with no restrictions on conditional variances) is not yet available for CCDr, whereas Theorem 4.1 in KORR:2017 establishes both model selection and estimation consistency for CSCS. See KORR:2017 for a more detailed comparison between these two algorithms.

As stated in the introduction, in many applications additional information is available about a partition of the variables for which we have prior knowledge about the direction of edges between partition sets, but not within them (for example, the dairy cattle data in Section 4, or gene knockout data and more general perturbation data SJKM:2014). We now discuss how one can create a hybrid of CSCS and CCDr which incorporates this information for DAG estimation.
2.1 The case with two partition blocks
For simplicity, we will initially work with the case where the variables $X_1, X_2, \dots, X_p$ are divided into two groups $A_1$ and $A_2$ such that we cannot have an edge from a node in $A_2$ to one in $A_1$, but can have one from a node in $A_1$ to one in $A_2$. Hence, ordering the variables so that those in $A_1$ come first, with $p_1 = |A_1|$ and $p_2 = |A_2|$, the matrix $B$ has the block triangular form
$$B = \begin{pmatrix} B_{11} & 0 \\ B_{21} & B_{22} \end{pmatrix}. \qquad (2.1)$$
The diagonal blocks $B_{11}$ and $B_{22}$ are constrained so that each is a permuted version of a lower triangular matrix with positive diagonal entries, i.e., $B_{11} \in \mathcal{B}_{p_1}$ and $B_{22} \in \mathcal{B}_{p_2}$, while the entries of the upper right off-diagonal block are all zero. However, there are no constraints on the off-diagonal block $B_{21}$. Similar to CCDr, we consider a Gaussian log-likelihood based objective function, denoted by $Q$, given by

$$Q(B) = \operatorname{tr}(B S B^T) - 2 \sum_{j=1}^{p} \log B_{jj} + \lambda \sum_{i \neq j} |B_{ij}|.$$
Here, our goal is to minimize the above function over the space $\widetilde{\mathcal{B}}$, defined by

$$\widetilde{\mathcal{B}} = \left\{ B : B \text{ has the block form } (2.1), \ B_{11} \in \mathcal{B}_{p_1}, \ B_{22} \in \mathcal{B}_{p_2} \right\},$$

as opposed to CCDr, where the goal is to minimize over the space $\mathcal{B}_p$. Note that since $\mathcal{B}_{p_1}$ and $\mathcal{B}_{p_2}$ are not convex sets, $\widetilde{\mathcal{B}}$ is also not a convex set.
2.1.1 A roadmap for the algorithm
As in CCDr and CSCS, we pursue a coordinatewise minimization approach. At each iteration, we cycle through the nontrivial elements of $B$, minimizing $Q$ with respect to each element while fixing the other entries at their current values. The minimizing value is then set as the new value of the corresponding element. We repeat these sweeps until the difference between the iterates at two successive iterations falls below a user-defined threshold.
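The iteration scheme just described can be written generically. The sketch below is our own abstraction: `minimize_entry` is a hypothetical hook standing in for the case-by-case closed-form updates discussed in the three cases below.

```python
import numpy as np

def coordinate_descent(B0, entries, minimize_entry, tol=1e-6, max_iter=100):
    """Generic coordinatewise minimization skeleton (our own sketch).

    B0             : initial matrix iterate
    entries        : list of (j, i) coordinates to cycle through
    minimize_entry : callable (B, j, i) -> scalar, the exact minimizer of
                     the objective in B[j, i] with the other entries fixed
    """
    B = B0.copy()
    for _ in range(max_iter):
        B_old = B.copy()
        for (j, i) in entries:                 # one full sweep
            B[j, i] = minimize_entry(B, j, i)  # exact 1-D minimization
        if np.max(np.abs(B - B_old)) < tol:    # user-defined threshold
            break
    return B

# Toy usage: minimizing sum((B - M)**2) entrywise converges to M itself.
M = np.array([[2.0, 0.0], [1.0, 3.0]])
B = coordinate_descent(np.zeros((2, 2)),
                       [(0, 0), (1, 0), (1, 1)],
                       lambda B, j, i: M[j, i])
print(B)
```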
Hence, for implementing coordinatewise minimization, we need to understand how to minimize $Q$ with respect to an arbitrary element of $B$, given all the other elements. Using straightforward calculations, we get

$$Q(B) = \sum_{i=1}^{p} \left( B_{i \cdot} S B_{i \cdot}^T - 2 \log B_{ii} \right) + \lambda \sum_{i \neq j} |B_{ij}|, \qquad (2.2)$$

where $B_{i \cdot}$ denotes the $i$th row of $B$.
Given the nature of the constraints on each block of $B$, we consider three different cases.

Case I: (Diagonal entries $B_{ii}$) It follows from (2.2) that, treating the other entries as fixed, $Q$ is the sum of quadratic and logarithmic terms in a given diagonal entry $B_{ii}$. In particular,

$$Q(B_{ii}) = S_{ii} B_{ii}^2 + 2 B_{ii} \sum_{k \neq i} B_{ik} S_{ki} - 2 \log B_{ii} + \text{terms not involving } B_{ii}.$$

Simple calculus shows that the unique minimizer (with respect to $B_{ii}$) of the above function is given by

$$B_{ii} = \frac{-c_i + \sqrt{c_i^2 + 4 S_{ii}}}{2 S_{ii}}, \qquad c_i = \sum_{k \neq i} B_{ik} S_{ki}. \qquad (2.3)$$
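A quick numerical sanity check of this closed form (our reconstruction: the minimizer of $s x^2 + 2 c x - 2 \log x$ over $x > 0$ is $x^* = (-c + \sqrt{c^2 + 4 s}) / (2 s)$) can be done against a brute-force grid search:

```python
import numpy as np

# Hedged check of the Case I minimizer with illustrative coefficients.
s, c = 2.0, 0.5
f = lambda x: s * x**2 + 2 * c * x - 2 * np.log(x)
x_star = (-c + np.sqrt(c**2 + 4 * s)) / (2 * s)  # closed-form minimizer

grid = np.linspace(0.01, 5.0, 100000)
x_grid = grid[np.argmin(f(grid))]                # brute-force minimizer
print(round(x_star, 3), round(x_grid, 3))        # the two values agree
```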
Case II: (Off-diagonal entries in $B_{21}$, the CSCS case) Consider $B_{ij}$, where $i \in A_2$ and $j \in A_1$. Since $B_{ji} = 0$, it follows from (2.2) that, treating the other entries as fixed, $Q$ is the sum of quadratic and absolute value terms in $B_{ij}$. In particular,

$$Q(B_{ij}) = S_{jj} B_{ij}^2 + 2 B_{ij} \sum_{k \neq j} B_{ik} S_{kj} + \lambda |B_{ij}| + \text{terms not involving } B_{ij}. \qquad (2.4)$$

Simple calculus shows that the unique minimizer (with respect to $B_{ij}$) of the above function is given by

$$B_{ij} = \frac{S_{\lambda}\!\left( -2 \sum_{k \neq j} B_{ik} S_{kj} \right)}{2 S_{jj}}, \qquad (2.5)$$

where $S_{\lambda}(x) = \operatorname{sign}(x) \left( |x| - \lambda \right)_{+}$ denotes the soft-thresholding operator. This step exactly resembles a typical step of the CSCS algorithm.
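The Case II update is a soft-thresholding step. The sketch below implements the generic one-dimensional problem (a quadratic plus an $\ell_1$ term) with our own parameterization of the coefficients `a` and `b`, which collect the fixed entries and sample covariances:

```python
import numpy as np

def soft_threshold(z, lam):
    """S_lam(z) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def quad_l1_min(a, b, lam):
    """Unique minimizer of  a*x**2 + b*x + lam*|x|  over x, with a > 0.

    Generic shape of the Case II update (our own parameterization):
    the minimizer is soft_threshold(-b, lam) / (2 * a).
    """
    return soft_threshold(-b, lam) / (2.0 * a)

# Examples: penalty inactive vs. entry shrunk exactly to zero.
print(quad_l1_min(1.0, -4.0, 1.0))  # (4 - 1) / 2 = 1.5
print(quad_l1_min(1.0, -0.5, 1.0))  # 0.0 (shrunk to zero)
```

The exact-zero case is what produces sparsity in the estimated matrix: whenever the unpenalized optimum is small enough, the entry, and hence the corresponding edge, is removed.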

Case III: (Off-diagonal entries in $B_{11}$ and $B_{22}$, the CCDr case) Consider the pair $(B_{ij}, B_{ji})$, where $i, j \in A_1$ or $i, j \in A_2$ with $i \neq j$. Since $B_{11} \in \mathcal{B}_{p_1}$ and $B_{22} \in \mathcal{B}_{p_2}$, it follows that at most one of $B_{ij}$ or $B_{ji}$ is nonzero. So, as in CCDr Aragam:Zhou:2015, we jointly minimize $Q$ as a function of $(B_{ij}, B_{ji})$. This can be done as follows. If a nonzero value for $B_{ij}$ would violate the DAG constraint (equivalently, the constraint that $B_{11} \in \mathcal{B}_{p_1}$ and $B_{22} \in \mathcal{B}_{p_2}$), then we set $B_{ij} = 0$, minimize $Q$ as a function of $B_{ji}$, and update that entry as specified in (2.4) and (2.5) (with the roles of $i$ and $j$ exchanged). If a nonzero value for $B_{ji}$ would violate the DAG constraint, then we set $B_{ji} = 0$, minimize $Q$ as a function of $B_{ij}$, and update that entry as specified in (2.4) and (2.5). However, it is possible that neither choice violates the DAG constraint. In that case, we compute candidate values for both $B_{ij}$ and $B_{ji}$ using appropriate versions of (2.5), pick the one that makes the larger contribution towards minimizing $Q$, and set the other one to zero. This step exactly resembles a typical step of the CCDr algorithm.
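The pairwise Case III update can be sketched as follows. This is our own abstraction: the objective, the one-dimensional minimizer, and the acyclicity check are passed in as hypothetical callables, since the paper's exact bookkeeping lives in Algorithm LABEL:alg:pdag.

```python
import numpy as np

def update_pair(obj, minimize_coord, violates_dag, B, i, j):
    """Case III sketch (our own abstraction): jointly update the pair
    (B[i, j], B[j, i]), at most one of which may be nonzero.

    obj            : objective value at a full matrix B
    minimize_coord : (B, r, c) -> best value of B[r, c], others held fixed
    violates_dag   : (B, r, c) -> True if a nonzero B[r, c] creates a cycle
    """
    candidates = []
    for (r, c) in [(i, j), (j, i)]:
        if not violates_dag(B, r, c):
            Bc = B.copy()
            Bc[r, c] = minimize_coord(B, r, c)
            Bc[c, r] = 0.0                    # the mirror entry must be zero
            candidates.append((obj(Bc), Bc))
    if candidates:                            # keep the better of the two
        B[:] = min(candidates, key=lambda t: t[0])[1]
    return B

# Toy usage: objective is squared distance to a target with a single
# nonzero entry at (1, 0); the update keeps (1, 0) and zeroes (0, 1).
T = np.array([[0.0, 0.0], [2.0, 0.0]])
obj = lambda B: float(np.sum((B - T) ** 2))
B = update_pair(obj, lambda B, r, c: T[r, c], lambda B, r, c: False,
                np.zeros((2, 2)), 0, 1)
print(B[1, 0], B[0, 1])  # 2.0 0.0
```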
The resulting coordinatewise minimization algorithm for $Q$, called PartitionDAG, which repeatedly iterates through all the entries of $B$ based on the three cases discussed above, is provided in Algorithm LABEL:alg:pdag. Case II and Case III, which correspond to typical steps of the CSCS and CCDr algorithms respectively, demonstrate why we regard the PartitionDAG algorithm as a hybrid of these two algorithms.
2.2 The case with multiple partition blocks
Algorithm LABEL:alg:pdag can be easily generalized to the case where the variables are partitioned into $K$ blocks, say $A_1, A_2, \dots, A_K$, such that any edge between a node in $A_k$ and a node in $A_{k'}$ is directed from $A_k$ to $A_{k'}$ if $k < k'$. However, the ordering within each $A_k$ is not known. In particular, let $p_k = |A_k|$, where $p_1 + p_2 + \dots + p_K = p$. Under these constraints, the matrix $B$ has a block lower triangular structure, which can be denoted as follows:

$$B = \begin{pmatrix} B_{11} & 0 & \cdots & 0 \\ B_{21} & B_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ B_{K1} & B_{K2} & \cdots & B_{KK} \end{pmatrix}. \qquad (2.6)$$

In particular, the parameter $B$ lies in the space $\widetilde{\mathcal{B}}_K$ given by

$$\widetilde{\mathcal{B}}_K = \left\{ B : B \text{ has the block form } (2.6), \ B_{kk} \in \mathcal{B}_{p_k} \text{ for } 1 \le k \le K \right\}.$$
We again use a coordinatewise minimization approach for minimizing $Q$ over this constrained space. Similar to the two-partition-block case, the coordinatewise minimizations can be divided into three cases.

The first case deals with the diagonal entries $B_{ii}$, and the unique minimizer has exactly the same form as in (2.3).

The second case (the CSCS case) deals with off-diagonal entries which belong to one of the strictly lower triangular blocks $B_{kk'}$ with $k > k'$, and the unique minimizer has exactly the same form as in (2.5).

Finally, the third case (the CCDr case) deals with off-diagonal entries which belong to one of the diagonal blocks $B_{kk}$, and the corresponding pairwise update has exactly the same form as in Case III of Section 2.1.1, based on (2.5). The algorithm, using the steps described above, is provided as Algorithm LABEL:alg:pdagR in the appendix.
Note that while $Q$ is jointly convex in $B$, the domain of minimization is not a convex set. Hence, to the best of our knowledge, existing results in the literature do not imply convergence of the coordinatewise minimization algorithm (Algorithm LABEL:alg:pdagR). Using standard arguments (for example, similar to Theorem 4.1 of Tseng:2001), the following result can be established.
Lemma 2.1
Assuming that all the diagonal entries of $S$ are positive, any cluster point of the sequence of iterates produced by Algorithm LABEL:alg:pdagR is a stationary point of $Q$ in the constrained domain.
2.3 Advantages of PartitionDAG
We now discuss some of the computational and statistical advantages of using the partition based ordering information in the DAG estimation algorithms derived in this paper.

(Parallelizability) Consider the general multiple block case described in Section 2.2. After some manipulations, it can be shown that the objective function can be decomposed as a sum of $K$ functions, where each function exclusively uses entries from a distinct block row of $B$. In particular, it can be shown that

$$Q(B) = \sum_{k=1}^{K} h_k(B),$$

where $h_k$ only depends on the entries in block row $k$ of $B$. As a result, the minimization over each block row can be implemented in parallel, as shown in Algorithm LABEL:alg:pdagR. This can lead to substantial computational advantages, as illustrated in our experiments.
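A minimal sketch of dispatching the independent block-row subproblems concurrently (our own illustration; `minimize_block_row` is a hypothetical stand-in for the actual per-row coordinate sweeps):

```python
from concurrent.futures import ThreadPoolExecutor

def minimize_block_row(k):
    # Hypothetical stand-in for minimizing the k-th block-row term h_k;
    # each h_k depends only on block row k of B, so the K subproblems
    # are independent of one another.
    return k * k  # placeholder for the actual coordinatewise sweeps

# Dispatch the K = 4 independent subproblems concurrently (a sketch;
# a real implementation would use processes for CPU-bound work).
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(minimize_block_row, range(4)))
print(results)  # [0, 1, 4, 9]
```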

(Number of computations) With the additional partition information, many of the entries of $B$ are automatically set to zero. This reduces the number of computations PartitionDAG needs to perform in comparison to CCDr. In addition, many of the computations carried out by PartitionDAG fall under Case II, as discussed in Section 2.1, which is much simpler and faster than Case III, the case CCDr must handle for every single coordinate.

(Estimation accuracy) Leveraging more information can, unsurprisingly, lead to improved estimation accuracy, as borne out in our simulation studies.
3 Simulation experiments
In this section, we perform extensive simulations using four algorithms: CCDr (no ordering information assumed known), the PC algorithm (see Kalisch:Buhlmann:2007; no ordering information assumed known), the proposed PartitionDAG (partial ordering information assumed known), and CSCS (full ordering assumed known). The goal is to explore the following questions about the PartitionDAG algorithm in realistic settings.

Can PartitionDAG effectively leverage the partial ordering information to improve performance (as compared to methods such as CCDr and the PC algorithm, which do not use any ordering information)?

As the partitions become finer, does the performance of PartitionDAG improve?

If the number of sets in the partition is kept the same, but the elements of these sets are changed so that the partition is more informative, does the performance of PartitionDAG improve?

How does the computational speed of PartitionDAG compare to CCDr, and how does this change when the partition becomes finer?
We investigate each of these questions separately in the subsections below. The testing-based PC algorithm is much slower than the penalized algorithms (CSCS, PartitionDAG, CCDr), but serves as a popular benchmark, and hence is included in the first two subsections below, where the number of network nodes is 50. In Section 3.4, we consider networks with 100 and 200 nodes, settings in which the PC algorithm can be very slow.
3.1 Partial ordering info vs. no ordering info: DREAM3 data
The goal of this experiment is to explore whether partition-based ordering information can help improve accuracy and computational efficiency in realistic settings. With this in mind, we perform a number of simulation studies using gene regulatory networks from the DREAM3 In Silico challenge (dream1; dream2; dream3). This challenge provides the transcriptional networks of three yeast organisms, which we denote Yeast 1, Yeast 2, and Yeast 3. These networks mimic activations and regulations that occur in gene regulatory networks. All networks are known and have 50 nodes.
For each DAG, we generated a random $B$ by sampling the off-diagonal nonzero entries from a uniform distribution between 0.3 and 0.7 and assigning them a positive or negative sign with equal probability. The diagonal entries were all set to 1. Then, the “true” covariance matrix was computed, and the corresponding multivariate Gaussian distribution was used to generate twenty datasets for each sample size $n = 40, 50, 100, 200$. For each sample size, each of the three algorithms (PC, CCDr, PartitionDAG) was run on each dataset for a range of penalty parameter values (for the PC algorithm, the significance level of the hypothesis tests played the role of the penalty parameter). Note that the DAG estimation problem is a three-way classification problem (two classes corresponding to the two kinds of directed edges, and one class corresponding to no edge). Hence, performance was summarized using the mean AUC-MA (macro-averaged area-under-the-curve; see, for example, TKV:2010) value over the twenty repetitions. For each method, the penalty parameter (or significance level, in the case of the PC algorithm) was varied over a range of 30 values to yield a completely sparse network at one end and a completely dense network (ignoring edge directions) at the other. The AUC values for the three binary classification problems (one per class) were computed and normalized by dividing by the respective false positive rate (FPR) range. The AUC-MA was then computed by averaging the three class-wise AUC values. The results for the three networks (Yeast 1, Yeast 2, Yeast 3) are summarized in Table 1.

As expected, the performance of each method improves with increasing sample size. It is clear that PartitionDAG outperforms both the PC algorithm and the CCDr algorithm for all the DAGs by quite a large margin, and its performance is more or less consistent regardless of the complexity of the DAG topology. This demonstrates that the PartitionDAG algorithm can successfully leverage the partial ordering information to improve performance.
Method  Sample size  AUC-MA (Yeast 1)  AUC-MA (Yeast 2)  AUC-MA (Yeast 3) 

CCDr  40  0.5097  0.4413  0.4397 
PartitionDAG  40  0.6470  0.5842  0.5619 
PC  40  0.4743  0.4160  0.4211 
CCDr  50  0.5242  0.4505  0.4489 
PartitionDAG  50  0.6548  0.6108  0.5767 
PC  50  0.5009  0.4273  0.4278 
CCDr  100  0.5624  0.4815  0.4716 
PartitionDAG  100  0.6830  0.6302  0.6004 
PC  100  0.5470  0.4491  0.4576 
CCDr  200  0.5884  0.4943  0.4895 
PartitionDAG  200  0.7070  0.6544  0.6150 
PC  200  0.5795  0.4780  0.4822 
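The AUC-MA summary reported above can be sketched as follows (our own reading of the protocol: normalize each class-wise AUC by the FPR range it covers, then average; the numeric values below are hypothetical):

```python
import numpy as np

def macro_auc(per_class_auc, per_class_fpr_range):
    """AUC-MA sketch: each class-wise AUC is normalized by the FPR range
    actually covered by that class's ROC curve, then the three normalized
    values are averaged (our own reconstruction of the protocol)."""
    normalized = [a / r for a, r in zip(per_class_auc, per_class_fpr_range)]
    return float(np.mean(normalized))

# Hypothetical class-wise values for the three classes
# (edge ->, edge <-, no edge):
print(macro_auc([0.45, 0.40, 0.50], [0.9, 0.8, 1.0]))  # 0.5
```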
3.2 Fine partition vs. coarse partition: Yeast 3 network
Note that, given a partition, PartitionDAG assumes the ordering of edges between the sets in the partition to be known, and the ordering of edges within each partition set to be unknown. The goal of this subsection is to explore whether using a finer partition (and hence more knowledge of the ordering) improves the performance of the PartitionDAG approach. We perform simulations using the Yeast 3 data from the DREAM3 challenge mentioned in Section 3.1. Recall that the underlying network is known, mimics activations and regulations that occur in gene regulatory networks, and has 50 nodes. We topologically order the nodes from 1 to 50, so that any edge is directed from a smaller node to a bigger node. Again, we construct a “true” matrix $B$ consistent with the known DAG, and generated one hundred multivariate datasets for each sample size $n = 40, 50, 100, 200$. We analyze each of the datasets thus generated using PartitionDAG with three increasingly fine partitions into consecutive blocks of nodes: a two-set partition (referred to as PDAG2), a three-set partition (referred to as PDAG3), and a four-set partition (referred to as PDAG4), along with the PC and CCDr algorithms as baselines. The macro-averaged area-under-the-curve (AUC-MA) values for each algorithm are provided in Table 2.
Again, we see that, as expected, the performance of each method improves with increasing sample size. Also, as more information about the ordering becomes available through finer partitions, the performance of PartitionDAG clearly improves.
Method  Sample size  AUC-MA 

PC  40  0.4163 
CCDR  40  0.4381 
PDAG2  40  0.5714 
PDAG3  40  0.5740 
PDAG4  40  0.5970 
PC  50  0.4247 
CCDR  50  0.4420 
PDAG2  50  0.5792 
PDAG3  50  0.5970 
PDAG4  50  0.6189 
PC  100  0.4540 
CCDR  100  0.4597 
PDAG2  100  0.5987 
PDAG3  100  0.6069 
PDAG4  100  0.6393 
PC  200  0.4756 
CCDR  200  0.4757 
PDAG2  200  0.6112 
PDAG3  200  0.6277 
PDAG4  200  0.6585 
3.3 Informative vs. Noninformative partition: Yeast 1 network
In this section, we compare the performance of PartitionDAG using two different two-set partitions for the Yeast 1 network. Recall that this network has 50 nodes. We topologically order the nodes from 1 to 50 so that any edge in the true network is directed from a smaller node to a larger node. Each partition consists of a set of topologically early nodes together with its complement, constructed so that in the true graph any edge between the two sets is directed from the first set to the second. The first partition is more “informative” in the sense that, in the true network, more edges cross between its two sets than between the two sets of the second partition. Similar to earlier subsections, we generated one hundred multivariate datasets (from a multivariate Gaussian consistent with the true network) for each sample size $n = 40, 50, 100, 200$, and applied PartitionDAG with the two partitions discussed above (referred to as PDAG-INFO and PDAG-NONINFO) for a range of penalty parameter values. The macro-averaged area-under-the-curve (AUC-MA) values are provided in Table 3. These results show that the performance of PartitionDAG improves as we move from the non-informative to the informative partition, although the difference between the two AUC values shrinks with increasing sample size.
Method  Sample size  AUC-MA 

PDAGINFO  40  0.6272 
PDAGNONINFO  40  0.5864 
PDAGINFO  50  0.6321 
PDAGNONINFO  50  0.6025 
PDAGINFO  100  0.6617 
PDAGNONINFO  100  0.6263 
PDAGINFO  200  0.6801 
PDAGNONINFO  200  0.6401 
3.4 Scaling the number of nodes and computational time
In this section, we consider networks with a larger number of nodes ($p = 100$ and $p = 200$), and demonstrate that the parallelizability of PartitionDAG improves computational speed as well as statistical accuracy as the partitions grow finer. To this end, we generated a “true” matrix $B$ with a random sparsity pattern of about 95% zeros, respecting the DAG restrictions and conditions similar to those mentioned in the previous subsections. We then computed the corresponding covariance matrix and generated 20 datasets from a multivariate normal distribution with mean 0 and that covariance. The nodes were topologically ordered and partitioned into four sets of equal size. For each dataset, the following algorithms were used: PartitionDAG with a two-set coarsening of this partition (referred to as PDAG2), PartitionDAG with a three-set coarsening (referred to as PDAG3), PartitionDAG with the full four-set partition (referred to as PDAG4), and the PC algorithm. Table 4 provides the average wall-clock time needed by each algorithm. It is clear that the time improves drastically as we consider finer partitions, which is partly due to the fact that more of the processing is done in parallel for finer partitions. Table 5 provides the AUC-MA values, and, as expected, PDAG4 performs the best, followed by PDAG3 and then PDAG2.

p  n  PDAG2  PDAG3  PDAG4  PC 

100  50  4.5569  2.4027  0.8683  13.6724 
100  75  4.1270  2.1949  0.7903  23.1989 
100  100  3.601  1.8453  0.7246  29.4618 
100  200  3.1554  1.7055  0.6788  65.4514 
200  100  112.2623  57.6974  21.2002  110.9797 
200  150  117.5930  57.9034  19.3711  131.0355 
200  200  109.9690  53.0327  19.6453  150.8217 
200  400  98.9042  51.0516  20.1499  186.0643 
p  n  PDAG2  PDAG3  PDAG4  PC 

100  50  0.6777  0.7023  0.7260  0.5107 
100  75  0.7083  0.7365  0.7586  0.5596 
100  100  0.7231  0.7582  0.7799  0.5921 
100  200  0.7462  0.7686  0.7906  0.6565 
200  100  0.6870  0.7167  0.7323  0.4668 
200  150  0.7058  0.7382  0.7567  0.4917 
200  200  0.7168  0.7475  0.7646  0.5068 
200  400  0.7364  0.7660  0.7840  0.5352 
4 Analysis of dairy cattle data
4.1 Data background
In a recent research project led by the Universidad Nacional de Colombia and aimed at increasing productivity levels in high-tropic dairy operations, data on economic and biological variables associated with dairy cattle operations were collected from high-tropic dairy farms in the municipality of Guatavita, Department of Cundinamarca, Colombia, spanning the period from June 2016 to August 2017. A list of the variables, along with associated acronyms, is provided in Table 6. Based on domain-specific knowledge, this set of variables can be split into several groups according to causal relationships, such that the causal relationships between variables from different groups are known, but those between variables from the same group are unknown. PartitionDAG is therefore well-suited for analyzing this dataset, as it allows us to effectively incorporate this information.
SYMBOL  DETAILED MEANING 

“PGR”  Pasture growth rate 
“AP”  Total pasture (hectare) area of each herd 
“SR”  Stocking rate, i.e., number of individuals per hectare 
“OF”  Amount of offered forage (kg per individual) 
“AFI”  Average forage intake (kg per individual) 
“TFI”  Average total (forage + supp) feed intake (kg per individual) 
“AMC”  Average of milking cows in the herd 
“TCH”  Total number of cows in the herd 
“AMY”  Average milk yield (lt) per cow per day 
“ATS”  Average total solids of milk (%) 
“NW”  Number of workers in the herd per month 
“SM”  Amount of sold milk (lt) per month 
“PM”  Amount of milk (lt) produced per month 
“TMS”  Total solids of milk (kg) produced per month 
“MH”  Amount of milk per hectare per month 
“TS”  Total solids produced per hectare per day 
“TSC”  Total solids produced per cow per month 
“AMW”  Amount of milk (lt) per worker per month 
“TSW”  Total solids of milk (kg) per worker per month 
“CSC”  Cost (Colombian pesos) of soil correction strategies 
“PMC”  Nutritional and pasture management cost (Colombian pesos) 
“RMC”  Reproductive management cost (Colombian pesos) 
“SMC”  Sanitary management cost (Colombian pesos) 
“BCC”  Investments and bank credits cost (Colombian pesos) 
“TAX”  Taxes (Colombian pesos) 
“EVM”  Economic value of milk (Colombian pesos per liter) 
“TC”  Total cost 
“MSI”  Total income from milk sales (Colombian pesos) 
“AI”  Additional income (Colombian pesos) 
“MPC”  Milk production cost per liter 
“TI”  Total income (Colombian pesos) 
“WC”  Cost per worker per month (Colombian pesos) 
“CC”  Cost per cow per month (Colombian pesos) 
“CH”  Cost per hectare per month (Colombian pesos) 
“NI”  Net income (Colombian pesos) 
“WI”  Income per worker per month (Colombian pesos) 
4.2 Results and discussion
The variables were divided into 10 groups on the basis of domain-specific knowledge by an expert in dairy science (Dr. Carulla, coauthor). Specifically, knowledge of the hierarchical structure of a grazing dairy cattle operation was used EDT:2006. Colombian dairy operations are based on grasslands; thus, animals are mostly fed fresh forages. The hierarchy follows the natural production flow: it all starts at the soil and environmental (temperature, light, rain) levels, which strongly affect the amount and quality of forage Dillon:2006. Then, several variables associated with pasture management, cows' supplementation, and their genetic makeup determine efficiency in the complex process of transforming forage into milk BMKD:2003. Since an animal production system is a business, there is always interest in maximizing profit, which is explained by a large number of variables and their interactions, but is summarized by a single number: net income. Therefore, at the end of the hierarchy, that is, in the final causal groups, we find economic variables, basically costs and incomes.
Consequently, pasture growth rate was placed in the first causal group because this variable can be thought of as an output of the interaction between soil, environment, pasture genetics and pasture management, and it strongly determines the stocking rate (number of individuals or live weight per unit of area). Along with pasture growth rate, total grazing area defines the number of cows a herd can hold; as a result, the first group comprises these two variables. Stocking rate is computed as the total number of individuals, or total live weight in the herd, divided by total grazing area; hence, stocking rate was assigned to the second group. The following groups contain variables associated with forage allowance and total feed intake, milk yield, and resources used in milk production such as the number of workers. The last group comprises two relevant economic variables: total net income and net income per worker.
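An expert-elicited partition of this kind can be encoded as an ordered list of groups, under which an edge u → v is admissible only if u's group does not come after v's group. The sketch below is illustrative only: the variable names are a small subset and the admissibility rule (here allowing within-group edges) is an assumption, not the paper's exact specification.

```python
# Ordered groups elicited from the domain expert; earlier groups are
# causally upstream of later ones (illustrative subset of variables).
groups = [
    {"growth_rate", "total_area"},   # group 1: pasture growth, grazing area
    {"stocking_rate"},               # group 2: derived from group 1
    {"TFI"},                         # intermediate group: total feed intake
    {"AMY"},                         # intermediate group: milk yield
    {"NI", "WI"},                    # final group: economic outcomes
]

# Map each variable to the index of its group.
group_of = {v: i for i, g in enumerate(groups) for v in g}

def admissible(u, v, within_group=True):
    """True if an edge u -> v is compatible with the partial ordering."""
    if within_group:
        return group_of[u] <= group_of[v] and u != v
    return group_of[u] < group_of[v]

print(admissible("growth_rate", "stocking_rate"))  # True: group 1 precedes 2
print(admissible("NI", "TFI"))                     # False: would reverse order
```

Merging groups (as done below to obtain five groups) simply coarsens `group_of`, which enlarges the set of admissible edges and hence weakens the external information supplied to the estimator.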
The penalty parameter for the PartitionDAG approach was chosen to attain a target edge density, determined through a preliminary analysis. The estimated network (DAG) is depicted in Figure 1.
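Choosing the penalty to hit a target edge density can exploit the typically monotone relationship between the penalty and the number of estimated edges. A minimal sketch using bisection; the `density_at` callable is a hypothetical stand-in for refitting PartitionDAG at a given penalty, here replaced by a smooth decreasing curve so the snippet is self-contained:

```python
import math

def tune_penalty(density_at, target, lo=1e-4, hi=10.0, tol=1e-3, max_iter=60):
    """Bisect for a penalty whose edge density is close to `target`.

    `density_at(lam)` must return the edge density of the graph estimated
    with penalty `lam`, and is assumed non-increasing in `lam`.
    """
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        d = density_at(mid)
        if abs(d - target) < tol:
            return mid
        if d > target:      # too many edges -> penalize more
            lo = mid
        else:               # too sparse -> penalize less
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in for an actual estimator refit: density decays smoothly in lambda.
fake_density = lambda lam: math.exp(-lam)
lam = tune_penalty(fake_density, target=0.25)
```

In practice each `density_at` call is one model fit, so a coarse grid search followed by a few bisection steps keeps the cost manageable.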
In order to illustrate how the performance of PartitionDAG changes with the allocation of variables to partitions, we merged some of the groups to obtain a partition with five groups, and again estimated a causal network using PartitionDAG. The estimated network with five groups is depicted in Figure 2.
In addition to this partial ordering information, some relationships were known to exist; specifically, 31 edges in total are known, based both on background information dictated by animal science theory and on the fact that some variables were explicitly computed from others. For the sake of clarity, we provide selected examples next. It is well known that total dry matter intake has a direct impact on milk yield (HPS:2004); in addition, the sign of this relationship is known to be positive. This last fact becomes important when evaluating the corresponding entries of the estimated Cholesky factor of the precision matrix (given their regression interpretation). Moreover, for economic variables it is easier to know which edges should exist in the estimated DAG, because many economic indices are computed as linear combinations of other variables in the system. For instance, total cost (TC) is the sum of partial costs such as reproductive management cost (RMC) and the cost of soil correction strategies (CSC). Another example is net income (NI), which is the difference between total income (TI) and total cost (TC).
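The regression interpretation mentioned above can be made concrete. If the variables are listed in a topological order and each satisfies a linear structural equation with independent noise, the precision matrix factors exactly as Omega = T' T, where T = D^{-1/2}(I - B) is lower triangular, B holds the regression coefficients, and D the noise variances; off-diagonal entries of T are thus rescaled, sign-flipped regression coefficients. A small exact check with illustrative numbers (not the dairy data):

```python
import numpy as np

# A 3-variable linear SEM in topological order: x2 <- x1; x3 <- x1, x2.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],    # x2 = 0.8*x1 + e2
              [0.3, 0.5, 0.0]])   # x3 = 0.3*x1 + 0.5*x2 + e3
D = np.diag([1.0, 0.5, 0.25])     # noise variances

I = np.eye(3)
Sigma = np.linalg.inv(I - B) @ D @ np.linalg.inv(I - B).T  # implied covariance
Omega = np.linalg.inv(Sigma)                               # precision matrix

# Lower-triangular factor implied by the SEM: T = D^{-1/2} (I - B).
T = np.diag(1.0 / np.sqrt(np.diag(D))) @ (I - B)

assert np.allclose(T.T @ T, Omega)   # Omega = T' T holds exactly
# The (2,1) entry of T has sign opposite to the positive effect b21 = 0.8,
# so a known-positive causal effect appears as a negative factor entry.
assert T[1, 0] < 0
```

This is why a known-positive effect such as intake on milk yield translates into a sign constraint on the corresponding entry of the estimated Cholesky factor.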
Since this extra information was not used by PartitionDAG, it allows us to carry out a partial evaluation of the performance of the proposed method. To this end, in the five-group case, 14 out of the 31 known edges (45%) were present in the estimated network, while in the ten-group case, 17 out of the 31 known edges (55%) were present. The specific list of these edges, and which of them appear in the two estimated networks, is given in Table 7.
Edge (parent, child)  Five-group network  Ten-group network 
( NW , AMW )  Pres  Pres 
( PM , AMW )  Abs  Pres 
( TFI , AMY )  Abs  Abs 
( TC , CC )  Abs  Pres 
( TCH , CC )  Abs  Abs 
( AP , CH )  Abs  Abs 
( TC , CH )  Pres  Pres 
( NW , CSC )  Pres  Pres 
( TMS , CSC )  Pres  Abs 
( AP , MH )  Abs  Abs 
( PM , MH )  Abs  Pres 
( SR , MH )  Abs  Abs 
( PM , MPC )  Abs  Pres 
( TC , MPC )  Abs  Pres 
( EVM , MSI )  Pres  Pres 
( TC , NI )  Pres  Pres 
( TI , NI )  Pres  Pres 
( SR , OF )  Pres  Abs 
( AMY , PM )  Abs  Abs 
( PMC , TC )  Abs  Pres 
( SR , TCH )  Abs  Abs 
( AFI , TFI )  Abs  Abs 
( MSI , TI )  Pres  Pres 
( ATS , TMS )  Abs  Abs 
( AP , TS )  Abs  Abs 
( TMS , TS )  Pres  Abs 
( TMS , TSC )  Pres  Abs 
( NW , WC )  Pres  Pres 
( TC , WC )  Abs  Pres 
( NW , WI )  Pres  Pres 
( TI , WI )  Pres  Pres 
Also, a dairy science expert (Dr. Carulla, coauthor) inspected all the estimated edges for each network and identified which of the estimated edges were not meaningful. The following are examples of incorrectly inferred edges, in the sense that a direct causal relationship between the corresponding pair of variables does not exist: the economic value of milk (EVM) and milk production cost per liter (MPC), pasture growth rate and total pasture area, and growth rate and number of workers per month. As to the first edge, total milk cost per liter is determined by total production cost and total milk yield, not by the economic value of milk; the price paid to the producer does not cause production cost. Regarding the second edge, it is not reasonable to state that total grazing area is caused by pasture growth rate: total grazing area is explained by herd size and its productive targets (e.g., dairy and other agricultural activities), not by how fast its pastures grow. Finally, pasture growth rate cannot have a direct effect on the number of workers per month, because the latter variable depends on social factors as well as on the productivity level; of course, growth rate affects the productivity level and could therefore have an impact on the number of workers, but this does not imply a direct causal relationship. To summarize, a larger fraction of the estimated edges in the five-group network was deemed not meaningful than in the ten-group network. Once again, it can be seen that PartitionDAG is able to leverage additional external information (in the form of a finer partition) to improve performance.
The above analysis illustrates the usefulness of the proposed method both in dairy science and in the problem of DAG estimation more generally. In the first place, it is a sound approach to inferring a DAG and the associated causal relationships from observational data in animal production systems using the available partial ordering information, a feature exhibited by most of these agroecosystems. Further, knowledge of causal relationships in dairy and other animal production operations helps in making management decisions and recommendations, since it leads to identifying which nodes have the largest impact on the system outputs; the latter information singles out variables that should be modified and the direction of such modifications. Finally, the analysis of the estimated DAGs sheds light on interesting interactions between components of the production system and thus could be the basis for designing follow-up experiments to fully test their validity.