This work presents the development of a wireless sensor network (WSN), based on IEEE 802.15.4, for frost characterization in precision agriculture by measuring temperature. Our key objective is to minimize the power consumption of the network while allowing several measurement points per node and remote monitoring of the sensors' behaviour. For the communication interface between a WSN node and the sensors, we have developed a serial protocol inspired by SDI-12. Preliminary results show a low-cost, low-power WSN. The user can access and use the data for agronomic research.

},
isbn = {978-1-4799-4270-1},
doi = {10.1109/ARGENCON.2014.6868546},
url = {http://dx.doi.org/10.1109/ARGENCON.2014.6868546},
author = {Diedrichs, Ana Laura and Tabacchi, Germ{\'a}n and Grunwaldt, Guillermo and Pecchia, Mat{\'\i}as and Mercado, Gustavo and Gonzalez Antivilo, Francisco}
}
@proceedings {173,
title = {Learning Markov Network Structure using Few Independence Tests.},
journal = {SIAM Data Mining},
year = {2008},
pages = {680--691},
abstract = {In this paper we present the Dynamic Grow-Shrink Inference-based Markov network learning algorithm (abbreviated DGSIMN), which improves on GSIMN, the state-of-the-art algorithm for learning the structure of the Markov network of a domain from independence tests on data. DGSIMN, like other independence-based algorithms, works by conducting a series of statistical conditional independence tests toward the goal of restricting the number of possible structures to one, thus inferring that structure as the only possibly correct one. During this process, DGSIMN, like the GSIMN algorithm, uses the axioms that govern the probabilistic independence relation to avoid unnecessary tests, i.e., tests whose results can be inferred from the results of known ones. This results in both efficiency and reliability advantages over the simple application of statistical tests. However, one weakness of GSIMN is its rigid and heuristic ordering of the execution of tests, which results in potentially inefficient execution. DGSIMN instead uses a principled strategy, dynamically selecting the locally optimal test that is expected to increase the state of our knowledge about the structure the most. This is done by calculating the expected number of independence facts that will become known (through inference) after executing a particular test (before it is actually evaluated on data), and by selecting the one that is expected to maximize the number of such inferences, thus avoiding their potentially expensive evaluation on data.
As we demonstrate in our experiments, this results in an overall decrease in the computational requirements of the algorithm, sometimes dramatically, due to the decreased number of tests that must be evaluated on data. Experiments show that DGSIMN yields savings of up to 88\% on both sampled and benchmark data while achieving similar or better accuracy in most cases.

},
isbn = {978-1-61197-278-8},
issn = {978-0-89871-654-2},
doi = {10.1137/1.9781611972788.62},
url = {http://epubs.siam.org/doi/abs/10.1137/1.9781611972788.62},
author = {Gandhi, Parichey and Bromberg, Facundo and Margaritis, Dimitris}
}
@article {91,
title = {Learning Markov networks with context-specific independences.},
journal = {International Journal on Artificial Intelligence Tools},
volume = {23},
year = {2014},
month = {12/2014},
chapter = {1460030},
abstract = {This work focuses on learning the structure of Markov networks from data. Markov networks are parametric models for compactly representing complex probability distributions. These models are composed of a structure and numerical weights, where the structure describes the independences that hold in the distribution. Depending on the goal of structure learning, learning algorithms can be divided into: density estimation algorithms, where the structure is learned for answering inference queries; and knowledge discovery algorithms, where the structure is learned for describing independences qualitatively. The latter algorithms present an important limitation for describing independences because they use a single graph, a coarse-grained structure representation which cannot represent flexible independences. For instance, context-specific independences cannot be described by a single graph. To overcome this limitation, this work proposes a new alternative representation named canonical model, as well as the CSPC algorithm: a novel knowledge discovery algorithm for learning canonical models by using context-specific independences as constraints. In an extensive empirical evaluation, CSPC learns more accurate structures than state-of-the-art density estimation and knowledge discovery algorithms. Moreover, for answering inference queries, our approach obtains competitive results against density estimation algorithms, significantly outperforming knowledge discovery algorithms.