1 - 50 of 165
  • 1.
    Al Moubayed, Samer
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Beskow, Jonas
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Öster, Ann-Marie
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Salvi, Giampiero
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    van Son, Nic
    Ormel, Ellen
    Virtual Speech Reading Support for Hard of Hearing in a Domestic Multi-Media Setting (2009). In: INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2009, p. 1443-1446. Conference paper (Refereed)
    Abstract [en]

    In this paper we present recent results on the development of the SynFace lip-synchronized talking head towards multilinguality, varying signal conditions and noise robustness in the Hearing at Home project. We then describe the large-scale hearing-impaired user studies carried out for three languages. The user tests focus on measuring the gain in Speech Reception Threshold in Noise when using SynFace, and on measuring the effort scaling for hearing-impaired people when using SynFace. Preliminary analysis of the results does not show a significant gain in SRT or in effort scaling. However, looking at inter-subject variability, it is clear that many subjects benefit from SynFace, especially for speech in stereo babble noise.

  • 2.
    Al Moubayed, Samer
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    De Smet, Michael
    Van Hamme, Hugo
    Lip Synchronization: from Phone Lattice to PCA Eigen-projections using Neural Networks (2008). In: INTERSPEECH 2008: 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2008, p. 2016-2019. Conference paper (Refereed)
    Abstract [en]

    Lip synchronization is the process of generating natural lip movements from a speech signal. In this work we address the lip-sync problem using an automatic phone recognizer that generates a phone lattice carrying posterior probabilities. The acoustic feature vector contains the posterior probabilities of all the phones over a time window centered at the current time point. Hence this representation characterizes the phone recognition output including the confusion patterns caused by its limited accuracy. A 3D face model with varying texture is computed by analyzing a video recording of the speaker using a 3D morphable model. Training a neural network using 30 000 data vectors from an audiovisual recording in Dutch resulted in a very good simulation of the face on independent data sets of the same or of a different speaker.

  • 3. Ambrazaitis, G.
    et al.
    House, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Multimodal prominences: Exploring the patterning and usage of focal pitch accents, head beats and eyebrow beats in Swedish television news readings (2017). In: Speech Communication, ISSN 0167-6393, E-ISSN 1872-7182, Vol. 95, p. 100-113. Article in journal (Refereed)
    Abstract [en]

    Facial beat gestures align with pitch accents in speech, functioning as visual prominence markers. However, it is not yet well understood whether and how gestures and pitch accents might be combined to create different types of multimodal prominence, and how specifically visual prominence cues are used in spoken communication. In this study, we explore the use and possible interaction of eyebrow (EB) and head (HB) beats with so-called focal pitch accents (FA) in a corpus of 31 brief news readings from Swedish television (four news anchors, 986 words in total), focusing on effects of position in text, information structure as well as speaker expressivity. Results reveal an inventory of four primary (combinations of) prominence markers in the corpus: FA+HB+EB, FA+HB, FA only (i.e., no gesture), and HB only, implying that eyebrow beats tend to occur only in combination with the other two markers. In addition, head beats occur significantly more frequently in the second than in the first part of a news reading. A functional analysis of the data suggests that the distribution of head beats might to some degree be governed by information structure, as the text-initial clause often defines a common ground or presents the theme of the news story. In the rheme part of the news story, FA, HB, and FA+HB are all common prominence markers. The choice between them is subject to variation which we suggest might represent a degree of freedom for the speaker to use the markers expressively. A second main observation concerns eyebrow beats, which seem to be used mainly as a kind of intensification marker for highlighting not only contrast, but also value, magnitude, or emotionally loaded words; it is applicable in any position in a text. We thus observe largely different patterns of occurrence and usage of head beats on the one hand and eyebrow beats on the other, suggesting that the two represent two separate modalities of visual prominence cuing.

  • 4.
    Ananthakrishnan, Gopal
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Engwall, Olov
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Mapping between acoustic and articulatory gestures (2011). In: Speech Communication, ISSN 0167-6393, E-ISSN 1872-7182, Vol. 53, no 4, p. 567-589. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a definition for articulatory as well as acoustic gestures along with a method to segment the measured articulatory trajectories and acoustic waveforms into gestures. Using a simultaneously recorded acoustic-articulatory database, the gestures are detected based on finding critical points in the utterance, both in the acoustic and articulatory representations. The acoustic gestures are parameterized using 2-D cepstral coefficients. The articulatory trajectories are essentially the horizontal and vertical movements of Electromagnetic Articulography (EMA) coils placed on the tongue, jaw and lips along the midsagittal plane. The articulatory movements are parameterized using 2D-DCT using the same transformation that is applied to the acoustics. The relationship between the detected acoustic and articulatory gestures in terms of the timing as well as the shape is studied. In order to study this relationship further, acoustic-to-articulatory inversion is performed using GMM-based regression. The accuracy of predicting the articulatory trajectories from the acoustic waveforms is on par with state-of-the-art frame-based methods with dynamical constraints (with an average error of 1.45-1.55 mm for the two speakers in the database). In order to evaluate the acoustic-to-articulatory inversion in a more intuitive manner, a method based on the error in estimated critical points is suggested. Using this method, it was noted that the estimated articulatory trajectories using the acoustic-to-articulatory inversion methods were still not accurate enough to be within the perceptual tolerance of audio-visual asynchrony.

  • 5.
    Ananthakrishnan, Gopal
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Neiberg, Daniel
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Engwall, Olov
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    In search of Non-uniqueness in the Acoustic-to-Articulatory Mapping (2009). In: INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2009, p. 2799-2802. Conference paper (Refereed)
    Abstract [en]

    This paper explores the possibility and extent of non-uniqueness in the acoustic-to-articulatory inversion of speech, from a statistical point of view. It proposes a technique to estimate the non-uniqueness, based on finding peaks in the conditional probability function of the articulatory space. The paper corroborates the existence of non-uniqueness in a statistical sense, especially in stop consonants, nasals and fricatives. The relationship between the importance of the articulator position and non-uniqueness at each instance is also explored.

  • 6. Arzyutov, Dmitry
    et al.
    Lyublinskaya, Marina
    Nenet͡skoe olenevodstvo: geografii͡a, ėtnografii͡a, lingvistika [Nenets Reindeer Husbandry: Geography, Ethnography, and Linguistics] (2018). Collection (editor) (Refereed)
  • 7.
    Bergren, Max
    et al.
    Gavagai.
    Karlgren, Jussi
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Östling, Robert
    Stockholms universitet.
    Parkvall, Mikael
    Stockholms universitet.
    Inferring the location of authors from words in their texts (2015). In: Proceedings of the 20th Nordic Conference of Computational Linguistics, Linköping University Electronic Press, 2015. Conference paper (Refereed)
    Abstract [en]

    For the purposes of computational dialectology or other geographically bound text analysis tasks, texts must be annotated with their or their authors’ location. Many texts are locatable but most have no explicit annotation of place. This paper describes a series of experiments to determine how positionally annotated microblog posts can be used to learn location indicating words which then can be used to locate blog texts and their authors. A Gaussian distribution is used to model the locational qualities of words. We introduce the notion of placeness to describe how locational words are.

    We find that modelling word distributions to account for several locations and thus several Gaussian distributions per word, defining a filter which picks out words with high placeness based on their local distributional context, and aggregating locational information in a centroid for each text gives the most useful results. The results are applied to data in the Swedish language.

  • 8.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Bruce, Gösta
    Lunds universitet.
    Enflo, Laura
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Schötz, Susanne
    Lunds universitet.
    Recognizing and Modelling Regional Varieties of Swedish (2008). In: INTERSPEECH 2008: 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, 2008, p. 512-515. Conference paper (Refereed)
    Abstract [en]

    Our recent work within the research project SIMULEKT (Simulating Intonational Varieties of Swedish) includes two approaches. The first involves a pilot perception test, used for detecting tendencies in human clustering of Swedish dialects. 30 Swedish listeners were asked to identify the geographical origin of Swedish native speakers by clicking on a map of Sweden. Results indicate for example that listeners from the south of Sweden are better at recognizing some major Swedish dialects than listeners from the central part of Sweden, which includes the capital area. The second approach concerns a method for modelling intonation using the newly developed SWING (Swedish INtonation Generator) tool, where annotated speech samples are resynthesized with rule based intonation and audiovisually analysed with regards to the major intonational varieties of Swedish. We consider both approaches important in our aim to test and further develop the Swedish prosody model.

  • 9.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Edlund, Jens
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Nordstrand, Magnus
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    A Model for Multimodal Dialogue System Output Applied to an Animated Talking Head (2005). In: SPOKEN MULTIMODAL HUMAN-COMPUTER DIALOGUE IN MOBILE ENVIRONMENTS / [ed] Minker, Wolfgang; Bühler, Dirk; Dybkjær, Laila, Dordrecht: Springer, 2005, p. 93-113. Chapter in book (Refereed)
    Abstract [en]

    We present a formalism for specifying verbal and non-verbal output from a multimodal dialogue system. The output specification is XML-based and provides information about communicative functions of the output, without detailing the realisation of these functions. The aim is to let dialogue systems generate the same output for a wide variety of output devices and modalities. The formalism was developed and implemented in the multimodal spoken dialogue system AdApt. We also describe how facial gestures in the 3D-animated talking head used within this system are controlled through the formalism.

  • 10.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    House, David
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Visual correlates to prominence in several expressive modes (2006). In: INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2006, p. 1272-1275. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present measurements of visual, facial parameters obtained from a speech corpus consisting of short, read utterances in which focal accent was systematically varied. The utterances were recorded in a variety of expressive modes including certain, confirming, questioning, uncertain, happy, angry and neutral. Results showed that in all expressive modes, words with focal accent are accompanied by a greater variation of the facial parameters than are words in non-focal positions. Moreover, interesting differences between the expressions in terms of different parameters were found.

  • 11.
    Bigert, Johnny
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Kann, Viggo
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Knutsson, Ola
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Sjöbergh, Jonas
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Grammar checking for Swedish second language learners (2004). In: CALL for the Nordic Languages: Tools and Methods for Computer Assisted Language Learning, Copenhagen Business School: Samfundslitteratur, 2004, p. 33-47. Chapter in book (Other academic)
    Abstract [en]

    Grammar errors and context-sensitive spelling errors in texts written by second language learners are hard to detect automatically. We have used three different approaches for grammar checking: manually constructed error detection rules, statistical differences between correct and incorrect texts, and machine learning of specific error types. The three approaches have been evaluated using a corpus of second language learner Swedish. We found that the three methods detect different errors and therefore complement each other.

  • 12.
    Björkman, Beyza
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Language and communication.
    Pragmatic strategies in English as an academic lingua franca: Ways of achieving communicative effectiveness? (2011). In: Journal of Pragmatics, ISSN 0378-2166, E-ISSN 1879-1387, Vol. 43, no 4, p. 950-964. Article in journal (Refereed)
    Abstract [en]

    This paper will report the findings of a study that has investigated spoken English as a lingua franca (ELF) usage in Swedish higher education. The material comprises digital recordings of lectures and student group-work sessions, all being naturally occurring, authentic high-stakes spoken exchange, i.e. from non-language-teaching contexts. The aim of the present paper, which constitutes a part of a larger study, has been to investigate the role pragmatic strategies play in the communicative effectiveness of English as a lingua franca. The paper will document types of pragmatic strategies as well as point to important differences between the two speech event types and the implications of these differences for English-medium education. The findings show that lecturers in ELF settings make less frequent use of pragmatic strategies than students who deploy these strategies frequently in group-work sessions. Earlier stages of the present study (Bjorkman, 2008a, 2008b, 2009) showed that despite frequent non-standardness in the morphosyntax level, there is little overt disturbance in student group-work, and it is highly likely that a variety of pragmatic strategies that students deploy prevents some disturbance. It is reasonable to assume that, in the absence of appropriate pragmatic strategies used often in lectures, there is an increased risk for covert disturbance.

  • 13.
    Björkman, Beyza
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Language and communication.
    An analysis of polyadic English as a lingua franca (ELF) speech: A communicative strategies framework (2014). In: Journal of Pragmatics, ISSN 0378-2166, E-ISSN 1879-1387, Vol. 66, p. 122-138. Article in journal (Refereed)
    Abstract [en]

    This paper reports on an analysis of the communicative strategies (CSs) used by speakers in spoken lingua franca English (ELF) in an academic setting. The purpose of the work has primarily been to outline the CSs used in polyadic ELF speech which are used to ensure communication effectiveness in consequential situations and to present a framework that shows the different communicative functions of a number of CSs. The data comprise fifteen group sessions of naturally occurring student group-work talk in content courses at a technical university. Detailed qualitative analyses have been carried out, resulting in a framework of the communication strategies used by the speakers. The methodology here provides us with a taxonomy of CSs in natural ELF interactions. The results show that other than explicitness strategies, comprehension checks, confirmation checks and clarification requests were frequently employed CSs in the data. There were very few instances of self and other-initiated word replacement, most likely owing to the nature of the high-stakes interactions where the focus is on the task and not the language. The results overall also show that the speakers in these ELF interactions employed other-initiated strategies as frequently as self-initiated communicative strategies.

  • 14.
    Björkman, Beyza
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Language and communication.
    The pragmatics of English as a lingua franca in the international university: Introduction (2011). In: Journal of Pragmatics, ISSN 0378-2166, E-ISSN 1879-1387, Vol. 43, no 4, p. 923-925. Article in journal (Other academic)
  • 15.
    Blomberg, Mats
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Elenius, Daniel
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Tree-Based Estimation of Speaker Characteristics for Speech Recognition (2009). In: INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2009, p. 580-583. Conference paper (Refereed)
    Abstract [en]

    Speaker adaptation by means of adjustment of speaker characteristic properties, such as vocal tract length, has the important advantage, compared to conventional adaptation techniques, that the adapted models are guaranteed to be realistic if the descriptions of the properties are. One problem with this approach is that the search procedure to estimate them is computationally heavy. We address the problem by using a multi-dimensional, hierarchical tree of acoustic model sets. The leaf sets are created by transforming a conventionally trained model set using leaf-specific speaker profile vectors. The model sets of non-leaf nodes are formed by merging the models of their child nodes, using a computationally efficient algorithm. During recognition, a maximum likelihood criterion is followed to traverse the tree. Studies of one- (VTLN) and four-dimensional speaker profile vectors (VTLN, two spectral slope parameters and model variance scaling) exhibit a reduction of the computational load to a fraction compared to that of an exhaustive grid search. In recognition experiments on children's connected digits using adult and male models, the one-dimensional tree search performed as well as the exhaustive search. Further reduction was achieved with four dimensions. The best recognition results are 0.93% and 10.2% WER in TIDIGITS and PF-Star-Sw, respectively, using adult models.

  • 16.
    Boholm, Max
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.
    Risk, language and discourse (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This doctoral thesis analyses the concept of risk and how it functions as an organizing principle of discourse, paying close attention to actual linguistic practice.

              Article 1 analyses the concepts of risk, safety and security and their relations based on corpus data (the Corpus of Contemporary American English). Lexical, grammatical and semantic contexts of the nouns risk, safety and security, and the adjectives risky, safe and secure are analysed and compared. Similarities and differences are observed, suggesting partial synonymy between safety (safe) and security (secure) and semantic opposition to risk (risky). The findings both support and contrast theoretical assumptions about these concepts in the literature.

              Article 2 analyses the concepts of risk and danger and their relation based on corpus data (in this case the British National Corpus). Frame semantics is used to explore the assumptions of the sociologist Niklas Luhmann (and others) that the risk concept presupposes decision-making, while the concept of danger does not. Findings partly support and partly contradict this assumption.

              Article 3 analyses how newspapers represent risk and causality. Two theories are used: media framing and the philosopher John Mackie’s account of causality. A central finding of the study is that risks are “framed” with respect to causality in several ways (e.g. one and the same type of risk can be presented as resulting from various causes). Furthermore, newspaper reporting on risk and causality vary in complexity. In some articles, risks are presented without causal explanations, while in other articles, risks are presented as results from complex causal conditions. Considering newspaper reporting on an aggregated overall level, complex schemas of causal explanations emerge.

              Article 4 analyses how phenomena referred to by the term nano (e.g. nanotechnology, nanoparticles and nanorobots) are represented as risks in Swedish newspaper reporting. Theoretically, the relational theory of risk and frame semantics are used. Five main groups of nano-risks are identified based on the risk object of the article: (I) nanotechnology; (II) nanotechnology and its artefacts (e.g. nanoparticles and nanomaterials); (III) nanoparticles, without referring to nanotechnology; (IV) non-nanotechnological nanoparticles (e.g. arising from traffic); and (V) nanotechnology and nanorobots. Various patterns are explored within each group, concerning, for example, what is considered to be at stake in relation to these risk objects, and under what conditions. It is concluded that Swedish patterns of newspaper reporting on nano-risks follow international trends, influenced by scientific assessment, as well as science fiction.

              Article 5 analyses the construction and negotiation of risk in the Swedish controversy over the use of antibacterial silver in health care and consumer products (e.g. sports clothes and equipment). The controversy involves several actors: print and television news media, Government and parliament, governmental agencies, municipalities, non-government organisations, and companies. In the controversy, antibacterial silver is claimed to be a risk object that negatively affects health, the environment, and sewage treatment industry (objects at risk). In contrast, such claims are denied. Antibacterial silver is even associated with the benefit of mitigating risk objects (e.g. bacteria and micro-organisms) that threaten health and the environment (objects at risk). In other words, both sides of the controversy invoke health and the environment as objects at risk. Three strategies organising risk communication are identified: (i) representation of silver as a risk to health and the environment; (ii) denial of such representations; and (iii) benefit association, where silver is construed to mitigate risks to health and the environment.

  • 17.
    Boholm, Max
    School of Global Studies, University of Gothenburg, Gothenburg, Sweden..
    The semantic distinction between ‘risk’ and ‘danger’: A linguistic analysis (2012). In: Risk Analysis, ISSN 0272-4332, E-ISSN 1539-6924, Vol. 32, no 2, p. 281-293. Article in journal (Refereed)
    Abstract [en]

    The analysis combines frame semantic and corpus linguistic approaches in analyzing the role of agency and decision making in the semantics of the words “risk” and “danger” (both nominal and verbal uses). In frame semantics, the meanings of “risk” and of related words, such as “danger,” are analyzed against the background of a specific cognitive-semantic structure (a frame) comprising frame elements such as Protagonist, Bad Outcome, Decision, Possession, and Source. Empirical data derive from the British National Corpus (100 million words). Results indicate both similarities and differences in use. First, both “risk” and “danger” are commonly used to represent situations having potential negative consequences as the result of agency. Second, “risk” and “danger,” especially their verbal uses (to risk, to endanger), differ in agent-victim structure, i.e., “risk” is used to express that a person affected by an action is also the agent of the action, while “endanger” is used to express that the one affected is not the agent. Third, “risk,” but not “danger,” tends to be used to represent rational and goal-directed action. The results therefore to some extent confirm the analysis of “risk” and “danger” suggested by German sociologist Niklas Luhmann. As a point of discussion, the present findings arguably have implications for risk communication.

  • 18. Borg, Erik
    et al.
    Edquist, Gertrud
    Reinholdson, Anna-Clara
    Risberg, Arne
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    McAllister, Bob
    Speech and language development in a population of Swedish hearing-impaired pre-school-children, a cross-sectional study (2007). In: International Journal of Pediatric Otorhinolaryngology, ISSN 0165-5876, E-ISSN 1872-8464, Vol. 71, no 7, p. 1061-1077. Article in journal (Refereed)
    Abstract [en]

    Objective: There is little information on speech and language development in preschool children with mild, moderate or severe hearing impairment. The primary aim of the study is to establish a reference material for clinical use covering various aspects of speech and language functions and to relate test values to pure tone audiograms and parents' judgement of their children's hearing and language abilities. Methods: Nine speech and language tests were applied or modified, both classical tests and newly developed tests. Ninety-seven children with normal hearing and 156 with hearing impairment were tested. Hearing was 80 dB HL PTA or better in the best ear. Swedish was their strongest language. None had any additional diagnosed major handicaps. The children were 4-6 years of age. The material was divided into 10 categories of hearing impairment, 5 conductive and 5 sensorineural: unilateral; bilateral 0-20; 21-40; 41-60; 61-80 dB HL PTA. The tests, selected on the basis of a three component language model, are phoneme discrimination; rhyme matching; Peabody Picture Vocabulary Test (PPVT-III, word perception); Test for Reception of Grammar (TROG, grammar perception); prosodic phrase focus; rhyme construction; Word Finding Vocabulary Test (word production); Action Picture Test (grammar production); oral motor test. Results: Only categories with sensorineural loss showed significant differences from normal. Word production showed the most marked delay for 21-40 dB HL: 5 and 6 years p < 0.01; for 41-60 dB: 4 years p < 0.01 and 6 years p < 0.01 and 61-80 dB: 5 years p < 0.05. Phoneme discrimination 21-40 dB HL: 6 years p < 0.05; 41-60 dB: 4 years p < 0.01; 61-80 dB: 4 years p < 0.001, 5 years p < 0.001. Rhyme matching: no significant difference as compared to normal data. Word perception: sensorineural 41-60 dB HL: 6 years p < 0.05; 61-80 dB: 4 years p < 0.05; 5 years p < 0.01. Grammar perception: sensorineural 41-60 dB HL: 6 years p < 0.05; 61-80 dB: 5 years p < 0.05. Prosodic phrase focus: 41-60 dB HL: 5 years p < 0.01. Rhyme construction: 41-60 dB HL: 4 years p < 0.05. Grammar production: 61-80 dB HL: 5 years p < 0.01. Oral motor function: no differences. The Word production test showed a 1.5-2 years delay for sensorineural impairment 41-80 dB HL through 4-6 years of age. There were no differences between hearing-impaired boys and girls. Extended data for the screening test [E. Borg, A. Risberg, B. McAllister, B.M. Undemar, G. Edquist, A.C. Reinholdsson, et al., Language development in hearing-impaired children. Establishment of a reference material for a "Language test for hearing-impaired children", Int. J. Pediatr. Otorhinolaryngol. 65 (2002) 15-26] are presented. Conclusions: Reference values for expected speech and language development are presented that cover nearly 60% of the studied population. The effect of the peripheral hearing impairment is compensated for in many children with hearing impairment up to 60 dB HL. Above that degree of impairment, language delay is more pronounced, probably due to a loss of acuity. The importance of central cognitive functions, speech reading and signing for compensation of peripheral limitations is pointed out.

  • 19. Bruce, G.
    et al.
    Schötz, S.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Enflo, Laura
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Modelling intonation in varieties of Swedish (2008). In: Proceedings of the 4th International Conference on Speech Prosody, SP 2008, International Speech Communications Association, 2008, p. 571-574. Conference paper (Refereed)
    Abstract [en]

    The research project Simulating intonational varieties of Swedish (SIMULEKT) aims to gain more precise and thorough knowledge about some major regional varieties of Swedish: South, Göta, Svea, Gotland, Dala, North, and Finland Swedish. In this research effort, the Swedish prosody model and different forms of speech synthesis play a prominent role. The two speech databases SweDia 2000 and SpeechDat constitute our main material for analysis. As a first test case for our prosody model, we compared Svea and North Swedish intonation in a pilot production-oriented perception test. Naïve Swedish listeners were asked to identify the most Svea and North sounding stimuli. Results showed that listeners can differentiate between the two varieties from intonation only. They also provided information on how intonational parameters affect listeners' impression of Swedish varieties. All this indicates that our experimental method can be used to test perception of different regional varieties of Swedish.

  • 20.
    Carlson, Rolf
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Johan Liljencrants (1936-2012) in memoriam (2012). In: Journal of the International Phonetic Association, ISSN 0025-1003, E-ISSN 1475-3502, Vol. 42, no 2, p. 253-254. Article in journal (Refereed)
  • 21.
    Carlson, Rolf
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Gustafson, Kjell
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Strangert, Eva
    Cues for Hesitation in Speech Synthesis (2006). In: INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2006, p. 1300-1303. Conference paper (Refereed)
    Abstract [en]

    The current study investigates acoustic correlates to perceived hesitation based on previous work showing that pause duration and final lengthening both contribute to the perception of hesitation. It is the total duration increase that is the valid cue rather than the contribution by either factor. The present experiment using speech synthesis was designed to evaluate F0 slope and presence vs. absence of creaky voice before the inserted hesitation in addition to durational cues. The manipulations occurred in two syntactic positions, within a phrase and between two phrases, respectively. The results showed that in addition to durational increase, variation of both F0 slope and creaky voice had perceptual effects, although to a much lesser degree. The results have a bearing on efforts to model spontaneous speech including disfluencies, to be explored, for example, in spoken dialogue systems.

  • 22.
    Carlson, Rolf
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Hirschberg, Julia
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Cross-Cultural Perception of Discourse Phenomena (2009). In: INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2009, p. 1723-1726. Conference paper (Refereed)
    Abstract [en]

    We discuss perception studies of two low-level indicators of discourse phenomena by Swedish, Japanese, and Chinese native speakers. Subjects were asked to identify upcoming prosodic boundaries and disfluencies in Swedish spontaneous speech. We hypothesize that speakers of prosodically unrelated languages should be less able to predict upcoming phrase boundaries but potentially better able to identify disfluencies, since indicators of disfluency are more likely to depend upon lexical, as well as acoustic information. However, surprisingly, we found that both phenomena were fairly well recognized by native and non-native speakers, with some possible interference from word tones for the Chinese subjects.

  • 23. Caruth, Cathy
    Lewis, Arthur (Translator)
    Lögn och historia (2008). Other (Refereed)
  • 24.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    All Aboard the Louis Vuitton Train! (2016). In: Pólemos, ISSN 2036-4601, Vol. 10, no 1, p. 179-195. Article in journal (Refereed)
    Abstract [en]

    The article discusses fashion advertising as a means to access and understand contemporary social imaginary significations of the body politic, focusing on an advertisement for Louis Vuitton. The article suggests that one can read advertising as a form of continuous, running commentary that society makes of itself, and through which one can unearth the social imaginary. The article finds a plethora of meanings in the selected advertisement for Louis Vuitton, but the central finding is that the fashion advertisement represents community as an absence of community; in other words, as a deficit that the brand somehow is able to rectify.

  • 25.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Det akademiska samtalet (2015). In: Universitetet som medium / [ed] Matts Lindström & Adam Wickberg Månsson, Lund: Mediehistoria, Lunds universitet, 2015, p. 195-223. Chapter in book (Refereed)
  • 26.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC), Media Technology and Graphic Arts, Media (closed 20111231).
    Fiktion och autofiktion i Marguerite Duras berättelse ’La douleur’ (2008). In: Tolkningens scen: Festskrift till Roland Lysell, Stockholm: Aiolos, 2008, p. 190-207. Chapter in book (Refereed)
  • 27.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Melancholic Face-Off: Caryl Phillips’ Elegy over David Oluwale (2016). In: Diaspora, Law and Literature / [ed] Daniela Carpi & Klaus Stierstorfer, Berlin: De Gruyter, 2016, p. 327-347. Chapter in book (Refereed)
  • 28.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Recension av Serge Doubrovsky, Jacques Lecarme & Philippe Lejeune (red.), Autofictions & Cie, RITM 6 (Paris, Université de Paris X, 1993) (1994). In: Tidskrift för litteraturvetenskap, ISSN 1104-0556, E-ISSN 2001-094X, no 3-4, p. 146-148. Article, book review (Refereed)
  • 29.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Spacing Law and Politics: The Constitution and Representation of the Juridical (2016, ed. 1). Book (Refereed)
    Abstract [en]

    Examining the inherent spatiality of law, both theoretically and as social practice, this book presents a genealogical account of the emergence and the development of the juridical. In an analysis that stretches from ancient Greece, through late antiquity and early modern and modern Europe, and on to the contemporary courtroom, it considers legal and philosophical texts, artistic and literary works, as well as judicial practices, in order to elicit and document a series of critical moments in the history of juridical space. Offering a more nuanced understanding of law than that found in traditional philosophical, political or social accounts of legal history, Dahlberg forges a critical account of the intimate relations between law and politics that shows how juridical space is determined and conditioned in ways that are integral to the very functioning – and malfunctioning – of law.

  • 30.
    Dahlberg, Leif
    Stockholms universitet.
    Tre romantiska berättelser: Studier i Eyvind Johnsons Romantisk berättelse och Tidens gång, Lars Gustafssons Poeten Brumbergs sista dagar och död och Sven Delblancs Kastrater (1999). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The dissertation explores the notion 'romantic story' as a literary device in three Swedish late modernist novels: Eyvind Johnson's Romantisk berättelse (1953) (Romantic Story) and Tidens gång. En romantisk berättelse (1955) (The Passage of Time. A Romantic Story), Lars Gustafsson's Poeten Brumbergs sista dagar och död. En romantisk berättelse (1959) (The Last Days and Death of the Poet Brumberg. A Romantic Story), and Sven Delblanc's Kastrater. En romantisk berättelse (1975) (Castrati. A Romantic Story). The study consists of three parts, one on each novel.

    In part, the dissertation turns around the word 'romantic' and the historical period designated by this name; and addresses the topic of what the term has come to mean in a period of late modernist aesthetics. In part, it is an investigation of the three novels as 'romantic stories', i.e. a study of the generic properties of the novels, and also of the novel as a literary genre. By the very nature of the study the approach is eclectic, encompassing interests in paratextuality (chapters 2, 5, 10), narratology (chapter 3), autobiography (chapter 4), hypertextuality and intertextuality (chapters 6, 7, 8, 9, 11, 12), literary form (chapter 9), literary genres (chapter 10), as well as the relation between the literary work and the historical and cultural context that it is possible to construct around it (chapters 4, 6, 12, 13). However, the focus of the study always remains on the texts and how they work, making use of close reading as a strategic means of teasing out meaning and reference.

    The dissertation concludes with a chapter in which the three novels are looked at as a group and from the point of view of the internal histor(icit)y of Modernism.

  • 31.
    Dahlberg, Leif
    KTH, School of Computer Science and Communication (CSC).
    Unforgotten, Unforgiven. On the Representation of Pain and Suffering in Marguerite Duras’ La Douleur (2008). In: Polemos. Diritto e Cultura, Vol. 2, no 2, p. 101-114. Article in journal (Refereed)
  • 32.
    Dahlberg, Leif
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Graphic Arts, Media (closed 20111231).
    Snickars, Pelle
    Kungliga biblioteket.
    Berättande i olika medier (2008). Collection (editor) (Refereed)
  • 33. de Leeuw, Esther
    et al.
    Opitz, Conny
    Lubinska, Dorota
    KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Language and communication.
    Dynamics of first language attrition across the lifespan: Introduction (2013). In: International Journal of Bilingualism, ISSN 1367-0069, E-ISSN 1756-6878, Vol. 17, no 6, p. 667-674. Article in journal (Refereed)
  • 34.
    Edlund, Jens
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Beskow, Jonas
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Pushy versus meek: using avatars to influence turn-taking behaviour (2007). In: INTERSPEECH 2007: 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2007, p. 2784-2787. Conference paper (Refereed)
    Abstract [en]

    The flow of spoken interaction between human interlocutors is a widely studied topic. Amongst other things, studies have shown that we use a number of facial gestures to improve this flow - for example to control the taking of turns. This type of gestures ought to be useful in systems where an animated talking head is used, be they systems for computer mediated human-human dialogue or spoken dialogue systems, where the computer itself uses speech to interact with users. In this article, we show that a small set of simple interaction control gestures and a simple model of interaction can be used to influence users' behaviour in an unobtrusive manner. The results imply that such a model may improve the flow of computer mediated interaction between humans under adverse circumstances, such as network latency, or to create more human-like spoken human-computer interaction.

  • 35.
    Edlund, Jens
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Gustafson, Joakim
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Hidden resources - Strategies to acquire and exploit potential spoken language resources in national archives (2016). In: Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016, European Language Resources Association (ELRA), 2016, p. 4531-4534. Conference paper (Refereed)
    Abstract [en]

    In 2014, the Swedish government tasked a Swedish agency, The Swedish Post and Telecom Authority (PTS), with investigating how to best create and populate an infrastructure for spoken language resources (Ref N2014/2840/ITP). As a part of this work, the department of Speech, Music and Hearing at KTH Royal Institute of Technology has taken an inventory of existing potential spoken language resources, mainly in Swedish national archives and other governmental or public institutions. In this position paper, key priorities, perspectives, and strategies that may be of general, rather than Swedish, interest are presented. We discuss the broad types of potential spoken language resources available; the extent to which these resources are free to use; and, thirdly, the main contribution: strategies to ensure the continuous acquisition of spoken language resources in a manner that facilitates speech and speech technology research.

  • 36.
    Edlund, Jens
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Heldner, Mattias
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Hirschberg, Julia
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Pause and gap length in face-to-face interaction (2009). In: INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2009, p. 2779-2782. Conference paper (Refereed)
    Abstract [en]

    It has long been noted that conversational partners tend to exhibit increasingly similar pitch, intensity, and timing behavior over the course of a conversation. However, the metrics developed to measure this similarity to date have generally failed to capture the dynamic temporal aspects of this process. In this paper, we propose new approaches to measuring interlocutor similarity in spoken dialogue. We define similarity in terms of convergence and synchrony and propose approaches to capture these, illustrating our techniques on gap and pause production in Swedish spontaneous dialogues.

  • 37.
    Edlund, Jens
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Heldner, Mattias
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    /nailon/: Software for Online Analysis of Prosody (2006). Conference paper (Refereed)
    Abstract [en]

    This paper presents /nailon/ - a software package for online real-time prosodic analysis that captures a number of prosodic features relevant for interaction control in spoken dialogue systems. The current implementation captures silence durations; voicing, intensity, and pitch; pseudo-syllable durations; and intonation patterns. The paper provides detailed information on how this is achieved. As an example application of /nailon/, we demonstrate how it is used to improve the efficiency of identifying relevant places at which a machine can legitimately begin to talk to a human interlocutor, as well as to shorten system response times.

  • 38.
    Edlund, Jens
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Heldner, Mattias
    Wlodarczak, Marcin
    Catching wind of multiparty conversation (2014). In: LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014. Conference paper (Refereed)
    Abstract [en]

    The paper describes the design of a novel corpus of respiratory activity in spontaneous multiparty face-to-face conversations in Swedish. The corpus is collected with the primary goal of investigating the role of breathing for interactive control of interaction. Physiological correlates of breathing are captured by means of respiratory belts, which measure changes in cross-sectional area of the rib cage and the abdomen. Additionally, auditory and visual cues of breathing are recorded in parallel to the actual conversations. The corpus allows studying respiratory mechanisms underlying the organisation of spontaneous communication, especially in connection with turn management. As such, it is a valuable resource both for fundamental research and speech technology applications.

  • 39.
    Engwall, Olov
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Can audio-visual instructions help learners improve their articulation?: an ultrasound study of short term changes (2008). In: INTERSPEECH 2008: 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2008, p. 2631-2634. Conference paper (Refereed)
    Abstract [en]

    This paper describes how seven French subjects change their pronunciation and articulation when practising Swedish words with a computer-animated virtual teacher. The teacher gives feedback on the user's pronunciation with audiovisual instructions suggesting how the articulation should be changed. A wizard-of-Oz set-up was used for the training session, in which a human listener chose the adequate pre-generated feedback based on the user's pronunciation. The subjects' changes in articulation were monitored during the practice session with a hand-held ultrasound probe. The perceptual analysis indicates that the subjects improved their pronunciation during the training, and the ultrasound measurements suggest that the improvement was made by following the articulatory instructions given by the computer-animated teacher.

  • 40.
    Engwall, Olov
    KTH, Superseded Departments, Speech, Music and Hearing.
    Dynamical Aspects of Coarticulation in Swedish Fricatives: A Combined EMA and EPG Study (2000). In: TMH Quarterly Status and Progress Report, p. 49-73. Article in journal (Other academic)
    Abstract [en]

    An electromagnetic articulography (EMA) system and electropalatography (EPG) have been employed to study five Swedish fricatives in different vowel contexts. Articulatory measures at the onset of, the mean value during, and at the offset of the fricative were used to evidence the coarticulation throughout the fricative. The contextual influence on these three different measurements of the fricative is compared and contrasted to evidence how the coarticulation changes. Measures were made for jaw motion, lip protrusion and tongue body with EMA, and for linguopalatal contact with EPG. The data from the two sources were further combined and assessed for complementary and conflicting results.

  • 41.
    Engwall, Olov
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Wik, Preben
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Are real tongue movements easier to speech read than synthesized? (2009). In: INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2009, p. 824-827. Conference paper (Refereed)
    Abstract [en]

    Speech perception studies with augmented reality displays in talking heads have shown that tongue reading abilities are weak initially, but that subjects become able to extract some information from intra-oral visualizations after a short training session. In this study, we investigate how the nature of the tongue movements influences the results, by comparing synthetic rule-based and actual, measured movements. The subjects were significantly better at perceiving sentences accompanied by real movements, indicating that the current coarticulation model developed for facial movements is not optimal for the tongue.

  • 42.
    Ericsdotter, Christine
    et al.
    Stockholm University.
    Ternström, Sten
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Swedish (2012). In: The Use of the International Phonetic Alphabet in the Choral Rehearsal / [ed] Duane R. Karna, Scarecrow Press, 2012, p. 245-251. Chapter in book (Other academic)
  • 43.
    Fant, Gunnar
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    A personal note from Gunnar Fant (2009). In: Speech Communication, ISSN 0167-6393, E-ISSN 1872-7182, Vol. 51, no 7, p. 564-568. Article in journal (Other academic)
  • 44.
    Frichot, Hélène
    KTH, School of Architecture and the Built Environment (ABE).
    On the Relentless Logic of the Logo-Sign (2003). In: M/C: A Journal of Media and Culture, ISSN 1441-2616, Vol. 6, no 3. Article in journal (Refereed)
  • 45. Gerholm, Tove
    et al.
    Gustavsson, L.
    Schwarz, I.
    Marklund, U.
    Salomão, Gláucia Laís
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. Institutionen för Lingvistiken, Stockholm Universitet.
    Kallioinen, P.
    Andersson, S.
    Eriksson, F.
    Pagmar, D.
    Tahbaz, S.
    The Swedish MINT Project modelling infant language acquisition (2015). Conference paper (Refereed)
  • 46. Giaimo, Cara
    The Surprisingly Sticky Tale of the Hadza and the Honeyguide Bird (2016). Other (Other (popular science, discussion, etc.))
  • 47.
    Grancharov, Volodya
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Zhao, David Yuheng
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Lindblom, Jonas
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Kleijn, W. Bastiaan
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Non-Intrusive Speech Quality Assessment with Low Computational Complexity (2006). In: INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, BAIXAS: ISCA-INST SPEECH COMMUNICATION ASSOC, 2006, p. 189-192. Conference paper (Refereed)
    Abstract [en]

    We describe an algorithm for monitoring subjective speech quality without access to the original signal that has very low computational and memory requirements. The features used in the proposed algorithm can be computed from commonly used speech-coding parameters. Reconstruction and perceptual transformation of the signal are not performed. The algorithm generates quality assessment ratings without explicit distortion modeling. The simulation results indicate that the proposed non-intrusive objective quality measure performs better than the ITU-T P.563 standard despite its very low computational complexity.

  • 48.
    Grillner, Katja
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Critical Studies in Architecture.
    Architecture and the dilemma of identity: a study of the ’weak subject’ in Marcel Proust (1997). In: The interpretation of culture and the culture of interpretation / editors: Eva Hemmungs Wirtén and Erik Peurell, Vol. S. [63]-86. Article in journal (Refereed)
  • 49.
    Grillner, Katja
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Critical Studies in Architecture.
    Four essays framed: (questions of imagination, interpretation and representation in architecture) (1997). Book (Other academic)
  • 50.
    Grillner, Katja
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Critical Studies in Architecture.
    The primacy of perplexion: working architecture through a distracted order of experience : part I - fictional reality in search... (1995). In: Nordisk arkitekturforskning, Vol. 1995 (8:1), s. 55-67 : ill. Article in journal (Refereed)