@conference {90, title = {Aprendizaje de independencias espec{\'\i}ficas del contexto en Markov random fields}, booktitle = {XVII Congreso Argentino de Ciencias de la Computaci{\'o}n}, year = {2011}, abstract = {

Undirected models, or Markov random fields, are widely used in problems that learn an unknown distribution from a dataset. This is because they can represent a distribution efficiently by making explicit the conditional independences that may hold among its variables. Beyond these independences it is possible to represent others, the context-specific independences (CSIs), which, unlike the former, only hold for certain values taken by subsets of the variables. For this reason they are difficult to represent and to learn from data. In this work we present an approach for representing CSIs in undirected models, together with an algorithm that learns them from data using statistical tests. We report results in which the models learned by our algorithm are better than, or comparable to, models learned by other algorithms that do not use CSIs.

}, author = {Edera, Alejandro and Bromberg, Facundo} }
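
To make the notion of CSI concrete, here is a standard formalization from the CSI literature (a paraphrase, not text from the paper): a conditional independence must hold for every assignment of the conditioning set, while a context-specific independence is tied to one particular assignment, the context:

\[ X \perp Y \mid Z \iff P(x \mid y, z) = P(x \mid z) \ \text{for all assignments } x, y, z \]
\[ X \perp Y \mid Z = z_0 \iff P(x \mid y, z_0) = P(x \mid z_0) \ \text{for all assignments } x, y \]

The second statement is strictly weaker: it constrains the distribution only inside the context Z = z_0, which is why CSIs cannot be read off a single undirected graph.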
@conference {88, title = {The Grow-Shrink strategy for learning Markov network structures constrained by context-specific independences}, booktitle = {14th edition of the Ibero-American Conference on Artificial Intelligence}, year = {2014}, month = {11/2014}, publisher = {Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence LNCS/LNAI series}, address = {Santiago de Chile}, abstract = {
Markov networks are models for compactly representing complex probability distributions. They are composed of a structure and a set of numerical weights. The structure qualitatively describes independences in the distribution, which can be exploited to factorize the distribution into a set of compact functions. A key application of learning structures from data is the automatic discovery of knowledge. In practice, structure learning algorithms focused on knowledge discovery present a limitation: they use a coarse-grained representation of the structure, which cannot describe context-specific independences. Very recently, an algorithm called CSPC was designed to overcome this limitation, but it has a high computational complexity. This work mitigates this downside by presenting CSGS, an algorithm that uses the Grow-Shrink strategy to avoid unnecessary computations. In an empirical evaluation, the structures learned by CSGS achieve competitive accuracies at a lower computational cost than those obtained by CSPC.

}, isbn = {978-3-319-12027-0}, url = {http://www.springer.com/computer/ai/book/978-3-319-12026-3}, author = {Edera, Alejandro and Strappa, Yanela and Bromberg, Facundo} }
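
For readers unfamiliar with the Grow-Shrink strategy that CSGS adapts, below is a minimal Python sketch of the classic grow-shrink Markov blanket routine; indep(a, b, cond) stands in for any statistical independence test run on data and is an assumed interface, not the paper's API.

    def grow_shrink(x, variables, indep):
        # Returns an estimate of the Markov blanket of variable x.
        blanket = set()
        changed = True
        while changed:
            # Grow phase: add any variable still dependent on x given the blanket.
            changed = False
            for y in variables:
                if y != x and y not in blanket and not indep(x, y, blanket):
                    blanket.add(y)
                    changed = True
        # Shrink phase: remove false positives that the grow phase let in.
        for y in list(blanket):
            if indep(x, y, blanket - {y}):
                blanket.remove(y)
        return blanket

The two phases bound the number of tests performed, which is the property CSGS exploits to reduce unnecessary computations relative to CSPC.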
@article {69, title = {The IBMAP approach for Markov network structure learning}, journal = {Annals of Mathematics and Artificial Intelligence}, volume = {72}, year = {2014}, month = {04/2014}, pages = {197--223}, chapter = {197}, abstract = {
In this work we consider the problem of learning the structure of Markov networks from data. We present an approach for tackling this problem called IBMAP, together with an efficient instantiation of the approach: the IBMAP-HC algorithm, designed to avoid important limitations of existing independence-based algorithms. These algorithms proceed by performing statistical independence tests on data, completely trusting the outcome of each test. In practice, tests may be incorrect, resulting in potential cascading errors and a consequent reduction in the quality of the learned structures. IBMAP contemplates this uncertainty in the outcome of the tests through a probabilistic maximum-a-posteriori approach. The approach is instantiated in the IBMAP-HC algorithm, a structure selection strategy that performs a polynomial heuristic local search in the space of possible structures. We present an extensive empirical evaluation on synthetic and real data, showing that our algorithm significantly outperforms current independence-based algorithms in terms of data efficiency and quality of the learned structures, with equivalent computational complexity. We also show the performance of IBMAP-HC in a real-world knowledge discovery application: EDAs, evolutionary algorithms that use structure learning in each generation to model the distribution of populations. The experiments show that when IBMAP-HC is used to learn the structure, EDAs improve their convergence to the optimum.

}, keywords = {68T05, EDAs, Independence tests, Knowledge discovery, Markov network, Structure learning}, issn = {1012-2443}, doi = {10.1007/s10472-014-9419-5}, url = {http://dx.doi.org/10.1007/s10472-014-9419-5}, author = {Schl{\"u}ter, Federico and Bromberg, Facundo and Edera, Alejandro} }
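
A rough Python sketch of the IBMAP idea described above: score each candidate structure by how probable its independence assertions are given the test outcomes, and hill-climb on that score. The helpers assertions, neighbors, and test_prob are hypothetical stand-ins; the actual score and search are defined in the paper.

    import math

    def ib_score(structure, data, assertions, test_prob):
        # Sum of log posterior probabilities of the independence assertions
        # the structure encodes, given the data (test_prob must return > 0).
        return sum(math.log(test_prob(a, data)) for a in assertions(structure))

    def ibmap_hc(structure, data, assertions, neighbors, test_prob):
        # Heuristic local search: move to a better-scoring neighbor until
        # no single modification improves the score.
        best = ib_score(structure, data, assertions, test_prob)
        improved = True
        while improved:
            improved = False
            for candidate in neighbors(structure):
                score = ib_score(candidate, data, assertions, test_prob)
                if score > best:
                    structure, best, improved = candidate, score, True
                    break
        return structure

Unlike algorithms that trust each test outright, a score of this shape degrades gracefully when individual tests are unreliable, which is the motivation stated in the abstract.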
@article {91, title = {Learning Markov networks with context-specific independences}, journal = {International Journal on Artificial Intelligence Tools}, volume = {23}, year = {2014}, month = {12/2014}, chapter = {1460030}, abstract = {
This work focuses on learning the structure of Markov networks from data. Markov networks are parametric models for compactly representing complex probability distributions. These models are composed of a structure and numerical weights, where the structure describes the independences that hold in the distribution. Depending on the goal of structure learning, learning algorithms can be divided into: density estimation algorithms, where the structure is learned for answering inference queries; and knowledge discovery algorithms, where the structure is learned to describe independences qualitatively. The latter algorithms present an important limitation for describing independences because they use a single graph, a coarse-grained structure representation that cannot represent flexible independences. For instance, context-specific independences cannot be described by a single graph. To overcome this limitation, this work proposes a new alternative representation named the canonical model, as well as the CSPC algorithm, a novel knowledge discovery algorithm for learning canonical models by using context-specific independences as constraints. In an extensive empirical evaluation, CSPC learns more accurate structures than state-of-the-art density estimation and knowledge discovery algorithms. Moreover, for answering inference queries, our approach obtains competitive results against density estimation algorithms, significantly outperforming knowledge discovery algorithms.

}, keywords = {CSI models, Knowledge discovery, Markov networks, structure learning, context-specific independences}, issn = {0218-2130}, doi = {10.1142/S0218213014600306}, url = {http://www.worldscientific.com/doi/abs/10.1142/S0218213014600306}, author = {Edera, Alejandro and Schl{\"u}ter, Federico and Bromberg, Facundo} }
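
The core operation behind using context-specific independences as constraints can be pictured as an ordinary conditional independence test restricted to the rows of the dataset where a context holds. A minimal Python sketch with illustrative names (CSPC's actual procedure is defined in the paper):

    def independent_in_context(rows, x, y, z, context, indep_test):
        # rows: list of dicts mapping variable names to values.
        # context: dict fixing some variables to values, e.g. {"C": 1}.
        subset = [r for r in rows if all(r[v] == val for v, val in context.items())]
        # Run a standard test of X _||_ Y | Z on the context-restricted data.
        return indep_test(subset, x, y, z)

Because each context shrinks the usable sample, the reliability of such tests degrades with context size, which is one reason learning CSIs from data is harder than learning ordinary conditional independences.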
@article {87, title = {Markov random fields factorization with context-specific independences}, year = {2013}, institution = {arXiv.org}, abstract = {
Markov random fields provide a compact representation of joint probability distributions by encoding their independence properties in an undirected graph. The well-known Hammersley-Clifford theorem uses these conditional independences to factorize a Gibbs distribution into a set of factors. However, an important issue of using a graph to represent independences is that it cannot encode some types of independence relations, such as context-specific independences (CSIs). These are a particular case of conditional independences that hold only for a certain assignment of the conditioning set, in contrast to conditional independences, which must hold for all of its assignments. This work presents a method for factorizing a Markov random field according to the CSIs present in a distribution, and formally guarantees that this factorization is correct. This is presented in our main contribution, the context-specific Hammersley-Clifford theorem, a generalization to CSIs of the Hammersley-Clifford theorem, which applies to conditional independences.

}, note = {arXiv:1306.2295}, author = {Edera, Alejandro and Bromberg, Facundo and Schl{\"u}ter, Federico} }

@mastersthesis {207, title = {Aprendizaje de redes de Markov basado en independencias espec{\'\i}ficas del contexto: formalizaci{\'o}n y aplicaci{\'o}n a datos espaciales}, year = {2016}, author = {Edera, Alejandro} }
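
As a closing note, the classical factorization that the arXiv entry above generalizes can be stated compactly (an informal LaTeX paraphrase, not the paper's exact statement; \mathrm{cl}(G) denotes the cliques of the graph G):

\[ P(x) = \frac{1}{Z} \prod_{C \in \mathrm{cl}(G)} \phi_C(x_C) \]

The context-specific Hammersley-Clifford theorem, informally, lets the effective graph vary with the context: under a context Z = z, the distribution factorizes over the cliques of a context-dependent graph G_z, so factors can be dropped or merged in contexts where additional independences hold.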