<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="https://ir.unisa.ac.za/handle/10500/23902">
<title>South African Computer Journal 2000(25)</title>
<link>https://ir.unisa.ac.za/handle/10500/23902</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://ir.unisa.ac.za/handle/10500/24397"/>
<rdf:li rdf:resource="https://ir.unisa.ac.za/handle/10500/24396"/>
<rdf:li rdf:resource="https://ir.unisa.ac.za/handle/10500/24395"/>
<rdf:li rdf:resource="https://ir.unisa.ac.za/handle/10500/24394"/>
</rdf:Seq>
</items>
<dc:date>2026-05-12T21:28:01Z</dc:date>
</channel>
<item rdf:about="https://ir.unisa.ac.za/handle/10500/24397">
<title>Object oriented programs and a stack based virtual machine</title>
<link>https://ir.unisa.ac.za/handle/10500/24397</link>
<description>Object oriented programs and a stack based virtual machine
Waldron, JT
Dynamic quantitative measurements of bytecode and stack frame usage by Eiffel and Java programs in the Java Virtual Machine are made. Two Eiffel programs are dynamically analysed while executing on the JVM, and the results are compared with those from the Java programs. The aim is to examine whether properties such as instruction usage and stack frame size are properties of the Java programming language itself or are exhibited by Eiffel programs as well. Investigations analyse how the different assertion-checking options and optimizations possible with the SmallEiffel compiler affect bytecode and stack frame usage. Remarkably, the local_load, push_const and local_store instruction categories always account for very close to 40% of instructions executed, a property of the Java Virtual Machine for both the Java and Eiffel programming languages, irrespective of the compiler or compiler optimizations used. Java programs executed 75% of their bytecodes within the API, suggesting that one way to improve the speed of Java programs would be to compile the API methods to native instructions and save these on disk in a standard format, cutting the time spent interpreting programs. Only 4.8% of instructions were in the API when Eiffel programs executed.
</description>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://ir.unisa.ac.za/handle/10500/24396">
<title>Syntactic description of neighbourhood in quadtree</title>
<link>https://ir.unisa.ac.za/handle/10500/24396</link>
<description>Syntactic description of neighbourhood in quadtree
Tapamo, JR
Region representation is very important in graphics and image processing. This paper is concerned with a formalisation of image representation by quadtrees. Indeed, an image here is a language over an alphabet of four letters. With rewriting rules we describe the neighbourhood of a node in the representation of an image by a quadtree.
</description>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://ir.unisa.ac.za/handle/10500/24395">
<title>Orthogonal axial line placement in chains and trees of orthogonal rectangles</title>
<link>https://ir.unisa.ac.za/handle/10500/24395</link>
<description>Orthogonal axial line placement in chains and trees of orthogonal rectangles
Sanders, ID; Watts, DC; Hall, AD
Previous research has shown that the orthogonal axial line placement problem for orthogonal rectangles is NP-complete in general, but also that there are restrictions of the problem for which polynomial-time solutions can be obtained. This article presents algorithms that solve two restricted versions of the orthogonal axial line placement problem - chains and trees of orthogonal rectangles. These restricted versions are slightly more general than the polynomial-time versions previously presented.
</description>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://ir.unisa.ac.za/handle/10500/24394">
<title>Multilingual training of acoustic models in automatic speech recognition</title>
<link>https://ir.unisa.ac.za/handle/10500/24394</link>
<description>Multilingual training of acoustic models in automatic speech recognition
Nieuwoudt, C; Botha, EC
This paper evaluates the performance of a speech recognition system using acoustic models trained on multilingual data.
The reason in our case for using data from more than one language is that there may not be enough data available for a new language to train a robust recogniser. Two general strategies are employed: first, the pooling of data from the different languages for training and, second, the training of models on the data from one language and subsequent adaptation of the models using data from the new target language. For the first approach, English and Afrikaans training data are pooled in order to train hidden Markov models (HMMs) for the target language, Afrikaans. For the second approach, the parameters of HMMs trained on English data are adapted using maximum a posteriori probability (MAP) and maximum likelihood linear regression (MLLR) methods on Afrikaans data. Continuous density HMMs are used to model context-independent phones found in Afrikaans. Cross-language adaptation performance is evaluated in terms of phone recognition performance as well as for a continuous speech recognition task in Afrikaans. The interesting result is that, for continuous recognition, the best performance is obtained by simple pooling of the data, and this performance far exceeds the performance achievable using only data from the target language. The improvement is due to the fact that in our database there is no mismatch between the English and Afrikaans data (other than the language difference) and both languages were labelled with a consistent set of labels. Adaptation results indicate that both MAP adaptation and MLLR transformation of English models using Afrikaans adaptation data significantly improve model performance and also achieve better performance than is achievable by direct training on the adaptation data.
</description>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
