
1. Danielsson, Mats

et al.

Ekenberg, Love

Hansson, Karin

Idefelt, Jim

Larsson, Aron

Pahlman, Mona

Riabacke, Ari

Sundgren, David

Cross-disciplinary research in analytic decision support systems. In: ITI 2006: Proceedings of the 28th International Conference on Information Technology Interfaces / [ed] Luzar-Stiffler V, Dobric VH, New York: IEEE, 2006, p. 123-128. Conference paper (Refereed)

Abstract [en]

A main problem in decision support contexts is that unguided decision making is difficult and can lead to inefficient decision processes and undesired consequences. Therefore, decision support systems (DSSs) are of prime concern to any organization and there have been numerous approaches to delivering decision support from, e.g., computational, mathematical, financial, philosophical, psychological, and sociological angles. A key observation, however, is that effective and efficient decision making is not easily achieved by using methods from one discipline only. This paper describes some efforts made by the DECIDE Research Group to approach DSS development and decision making tools in a cross-disciplinary way.

2.

Sundgren, David

KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.

In decision analysis, maximising the expected utility is a commonly used approach to choosing the optimal alternative. But when probabilities and utilities are vague or imprecise, expected utility is fraught with complications. Studying second-order effects on decision analysis casts light on the importance of the structure of decision problems, pointing out some pitfalls in decision making and suggesting an easy-to-implement and easy-to-understand method of comparing decision alternatives. The topic of this thesis is such second-order effects of decision analysis, particularly with regard to expected utility and interval-bound probabilities. Explicit expressions are produced for the second-order distributions inherent in interval-bound probabilities in general, and likewise for distributions of expected utility for small decision problems. By investigating these distributions, the phenomenon of warping, that is, concentration of belief, is studied.

In real-life decision analysis, the probabilities and values of consequences are in general vague and imprecise. One way to model imprecise probabilities is to represent a probability with the interval between the lowest possible and the highest possible probability, respectively. However, there are disadvantages with this approach, one being that when an event has several possible outcomes, the distributions of belief in the different probabilities are heavily concentrated around their centers of mass, meaning that much of the information of the original intervals is lost. Representing an imprecise probability with the distribution's center of mass therefore in practice gives much the same result as using an interval, but a single number instead of an interval is computationally easier and avoids problems such as overlapping intervals. Using this, we demonstrate why second-order calculations can add information when handling imprecise representations, as in decision trees or probabilistic networks. We suggest a measure of belief density for such intervals. We also demonstrate important properties when operating on general distributions. The results herein apply also to approaches which do not explicitly deal with second-order distributions, instead using only first-order concepts such as upper and lower bounds.
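As a rough illustration of the concentration effect described above, the second-order belief over one probability component can be approximated by rejection sampling from a uniform distribution over the simplex. The outcome count and interval bounds below are illustrative choices, not values from the thesis:

```python
import random

def sample_simplex(n):
    """Draw a probability vector uniformly from the n-simplex
    (sorted-uniform-spacings construction)."""
    cuts = sorted(random.random() for _ in range(n - 1))
    points = [0.0] + cuts + [1.0]
    return [points[i + 1] - points[i] for i in range(n)]

def marginal_belief(n, lo, hi, trials=100_000):
    """Estimate the second-order belief in p_1 when every component
    is constrained to the interval [lo, hi], by rejection sampling."""
    accepted = []
    for _ in range(trials):
        p = sample_simplex(n)
        if all(lo <= pi <= hi for pi in p):
            accepted.append(p[0])
    return accepted

random.seed(0)
samples = marginal_belief(4, 0.1, 0.5)
mean = sum(samples) / len(samples)
# Belief mass concentrates near the center of mass 1/n = 0.25;
# it is not spread evenly over the original interval [0.1, 0.5].
```

Plotting a histogram of `samples` would show the peaked shape: the interval endpoints carry little belief compared with the region around the center of mass.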

4.

Sundgren, David

et al.

Dept. of Mathematics, Natural and Computer Sciences, University of Gävle.

Ekenberg, Love

KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.

Danielsson, Mats

KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.

Software agents and humans alike face severe difficulties in making decisions in uncertain contexts. One approach is to formalise the decision situation by means of decision theory, i.e. probabilities and utilities leading to the principle of maximising the expected utility. Expected utility is here considered as a stochastic variable: under the assumption that all utility values are equally likely, and that each vector of probability values is equally likely, the probability distribution of expected utility is calculated for two, three, and four possible outcomes. The tendency of these probability distributions to concentrate around the middle value is explored, as is its significance for decision making.
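The concentration described above can be reproduced numerically: sampling utilities uniformly on [0, 1] and probability vectors uniformly on the simplex, the variance of expected utility shrinks as the number of outcomes grows. A minimal Monte Carlo sketch, with arbitrary sample sizes:

```python
import random

def expected_utility_samples(n, trials=50_000):
    """Sample E = sum_i p_i * u_i with utilities uniform on [0, 1]
    and probability vectors uniform on the n-simplex."""
    out = []
    for _ in range(trials):
        cuts = sorted(random.random() for _ in range(n - 1))
        pts = [0.0] + cuts + [1.0]
        p = [pts[i + 1] - pts[i] for i in range(n)]
        u = [random.random() for _ in range(n)]
        out.append(sum(pi * ui for pi, ui in zip(p, u)))
    return out

random.seed(1)
stats = {}
for n in (2, 3, 4):
    s = expected_utility_samples(n)
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / len(s)
    stats[n] = (mean, var)
# The mean stays near the middle value 0.5 for every n, while the
# variance decreases with n: expected utility concentrates.
```

Analytically, under these uniformity assumptions the variance is 2/(3(n+1)) + (n-1)/(4(n+1)) - 1/4, which is strictly decreasing in n, matching the simulated trend.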

In attempting to address real-life decision problems, where uncertainty about input data prevails, some kind of representation of imprecise information is important, and several have been proposed over the years. In particular, first-order representations of imprecision, such as sets of probability measures, upper and lower probabilities, and interval probabilities and utilities of various kinds, have been suggested for enabling a better representation of the input sentences. A common problem is, however, that pure interval analyses in many cases cannot discriminate sufficiently between the various strategies under consideration, which, needless to say, is a substantial problem in real-life decision making in agents as well as decision support tools. This is one reason prohibiting more widespread use. In this article we demonstrate that in many situations, the discrimination can be made much clearer by using information inherent in the decision structure. We discuss how second-order probabilities, even when they are implicit, add information when handling aggregations of imprecise representations, as in decision trees and probabilistic networks. The important conclusion is that since structure carries information, the structure of the decision problem influences evaluations of all interval representations and is quantifiable.

6. Åhlén, J.

et al.

Sundgren, David

KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.

Color correction of underwater images has been considered a difficult task for a number of reasons. These include severe absorption by the water column, the unpredictable behavior of light under the water surface, limited access to reliable data for correction purposes, and the fact that we are only able to process three spectral channels, which is insufficient for most color correction applications. Here, the authors present a method to estimate a hyperspectral image from an RGB image and pointwise hyperspectral data. This is then used to color correct the hyperspectral underwater image and transform it back into RGB color space.

Coral reefs are monitored with different techniques in order to examine their health. Digital cameras, which provide an economically defendable tool for marine scientists to collect underwater data, tend to produce bluish images due to severe absorption of light at longer wavelengths. In this paper we study the possibilities of correcting for this color distortion through image processing. The decrease of red light with depth can be predicted by Beer's law. Another parameter that has been taken into account is the image enhancement functions built into the camera. We use a spectrometer and a reflectance standard to obtain the data needed to approximate the joint effect of these functions. This model is used to pre-process the underwater images taken by digital cameras so that the red, green and blue channels show correct values before the images are subjected to correction for the effects of the water column through application of Beer's law. This process is fully automatic, and the number of processed images is limited only by the speed of the computer system. Experimental results show that the proposed method works well for correcting images taken at different depths with two different cameras.
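A minimal sketch of the Beer's-law correction step described above, assuming per-channel attenuation coefficients. The coefficient values and pixel data below are illustrative, not taken from the paper:

```python
import math

# Illustrative per-metre attenuation coefficients (red attenuates fastest
# in water, blue slowest); real values would be measured or fitted.
K = {"red": 0.50, "green": 0.07, "blue": 0.03}

def correct_channel(value, k, depth_m):
    """Invert exponential attenuation I(d) = I0 * exp(-k * d),
    clipping to the 8-bit channel maximum."""
    return min(255.0, value * math.exp(k * depth_m))

# A hypothetical pixel observed at 5 m depth: red is heavily suppressed.
observed = {"red": 20, "green": 90, "blue": 120}
corrected = {c: correct_channel(v, K[c], 5.0) for c, v in observed.items()}
```

The red channel is boosted by the largest factor, exp(0.5 * 5) ≈ 12.2, which is consistent with the bluish cast of uncorrected underwater images.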

8. Åhlén, J.

et al.

Sundgren, David

KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.

The natural properties of the water column usually affect underwater imagery by suppressing high-energy light. In applications such as color correction of underwater images, estimation of water column parameters is crucial. Diffuse attenuation coefficients are estimated and used for further processing of data captured underwater. The coefficients indicate how fast light of different wavelengths decreases with increasing depth. Based on exact depth measurements and data from a spectrometer, the downwelling irradiance is calculated. Chlorophyll concentration and a yellow substance factor contribute to a great variety of attenuation coefficient values at different depths. By taking advantage of variations in depth, a method is presented to estimate the influence of dissolved organic matter and chlorophyll on color correction. Attenuation coefficients that depend on the concentration of dissolved organic matter in the water give an indication of how well a given spectral band is suited for a color correction algorithm.
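The estimation of a diffuse attenuation coefficient from downwelling irradiance measured at two depths can be sketched as follows; the irradiance readings, depths, and wavelength are hypothetical:

```python
import math

def diffuse_attenuation(e_d1, e_d2, d1, d2):
    """Estimate the diffuse attenuation coefficient K_d (per metre)
    from irradiance at two depths, using the exponential decay model
    E(d2) = E(d1) * exp(-K_d * (d2 - d1))."""
    return math.log(e_d1 / e_d2) / (d2 - d1)

# Hypothetical spectrometer readings at a red wavelength:
# irradiance drops from 12.0 at 2 m to 3.0 at 5 m.
kd_red = diffuse_attenuation(12.0, 3.0, 2.0, 5.0)  # about 0.46 m^-1
```

A large K_d for a band means that band loses information quickly with depth, which is why band-wise coefficients indicate how well each spectral band suits a color correction algorithm.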