

The Laboratory for Data Technologies specialises in basic and applied research in data acquisition, representation, integration, analysis, visualisation and management. Our main research areas are briefly described below.

Network and link analysis

Network analysis strives to reveal the structure of large and complex real-world networks. These include (online) social networks, web graphs, Internet maps, software system networks, power grids, road networks, biological and ecological networks, and others. Link analysis can be seen as data mining over relational data represented as networks. The main challenges are thus the classification, clustering and ranking of network nodes in the context of their immediate neighbours, with special emphasis on the applicability of the developed techniques.
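One classic node-ranking technique is PageRank. The following is a minimal sketch in pure Python, using plain power iteration on a small invented graph; production work would use an optimised library rather than this illustration.

```python
# Minimal power-iteration PageRank over a toy directed graph.
# The graph and all names below are illustrative assumptions.
def pagerank(edges, damping=0.85, iterations=50):
    """Rank nodes of a directed graph given as (source, target) pairs."""
    nodes = sorted({n for e in edges for n in e})
    out_links = {n: [t for s, t in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out_links[n] or nodes  # dangling nodes spread rank evenly
            for t in targets:
                new_rank[t] += damping * rank[n] / len(targets)
        rank = new_rank
    return rank

# Toy citation-style network: everyone links to D, D links back to A.
edges = [("A", "D"), ("B", "D"), ("C", "D"), ("D", "A")]
ranks = pagerank(edges)
top = max(ranks, key=ranks.get)  # D collects the most rank
```

Node D ends up ranked highest because it receives links from all other nodes, which is exactly the kind of structural signal link analysis exploits for ranking.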


Information extraction

Information extraction (IE) is a subfield of information retrieval whose task is to extract structured data from unstructured sources; we focus mainly on textual web sources. Tim Berners-Lee's vision of the Semantic Web would represent all information on the Internet as a semantic graph. In reality, very few web sources are fully semantically annotated, and the vast majority of published data will not be hand-labelled in the future either. We therefore try to approximate the Semantic Web through ontology-based information extraction.
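A minimal sketch of the core idea, turning unstructured text into structured records: a rule-based extractor that pulls (name, email) pairs out of free text with a regular expression. Real ontology-based IE uses far richer linguistic and semantic rules; the pattern and sample text here are invented for illustration.

```python
import re

# Hypothetical rule: "Firstname Lastname <address>" patterns in plain text.
PATTERN = re.compile(
    r"(?P<name>[A-Z][a-z]+ [A-Z][a-z]+) <(?P<email>[^@\s]+@[^>\s]+)>"
)

def extract_contacts(text):
    """Return a list of structured {name, email} records found in text."""
    return [m.groupdict() for m in PATTERN.finditer(text)]

text = "Contact Ada Lovelace <ada@example.org> or Alan Turing <alan@example.org>."
records = extract_contacts(text)
# records[0] → {'name': 'Ada Lovelace', 'email': 'ada@example.org'}
```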


Visualization of data

As the volume and complexity of data increase, it becomes very difficult for users to effectively explore large-scale datasets. Visualization, the graphical representation of data, is one answer to this problem. Its main purpose is to convey information clearly and efficiently through graphical means. Visualizing large amounts of data reveals patterns that may otherwise remain hidden, lets us quickly grasp quantities that would otherwise take a long time to study, and makes faults in the underlying data easier to spot.
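Even a crude rendering illustrates the point: the same numbers that look unremarkable as a raw list make an outlier obvious when drawn as bars. The sketch below uses a plain-text bar chart with made-up data; a real tool would of course draw proper graphics.

```python
def bar_chart(values, width=40):
    """Render numeric values as horizontal text bars scaled to `width` chars."""
    peak = max(values)
    return "\n".join(
        f"{v:6.1f} | {'#' * round(width * v / peak)}" for v in values
    )

data = [3.1, 2.9, 3.3, 11.8, 3.0, 2.7]  # invented data with one spike
print(bar_chart(data))  # the fourth bar stands out immediately
```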

Increasingly important aspects of visualization are dynamic representation and interactivity. The parameters of a visualization can be changed on demand, and particular small-scale features can be explored in more detail. On this basis we can formulate new queries over the datasets.


Semantic web and ontologies

The current World Wide Web (WWW) consists of interconnected documents that computers present to human users. These documents originate in open, interconnected systems to which every user can contribute, which also means that information quality cannot always be guaranteed. The current Web contains data, information and knowledge, but at this stage the role of computers is only to deliver and render the documents that describe this knowledge. To integrate different information resources, users have to interpret the data manually.
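The Semantic Web idea, by contrast, is to state facts as subject-predicate-object triples that a machine can query directly instead of prose it must parse. A minimal sketch, with an invented vocabulary and facts, and a wildcard query in the spirit of SPARQL:

```python
# A tiny in-memory triple store; vocabulary and facts are illustrative.
triples = {
    ("Ljubljana", "isCapitalOf", "Slovenia"),
    ("Slovenia", "memberOf", "EU"),
    ("Ljubljana", "type", "City"),
}

def query(s=None, p=None, o=None):
    """Match triples; None acts as a wildcard, like ?x in a SPARQL pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What do we know about Ljubljana?" -- no document parsing required.
facts = query(s="Ljubljana")
```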


Method engineering

Method engineering (ME) is an approach to creating software development methods that are specifically attuned to organisations or projects. In general, the idea lies in the conceptualisation, construction and adaptation of methods and tools for the development of information systems. ME has a long tradition in systems development research. An excellent review of past research can be found in Ralyté et al. [1], while for more recent efforts see the Proceedings of the 4th IFIP WG 8.1 Working Conference on Method Engineering [2].



NoSQL databases

NoSQL represents a novel and fast-growing category of data management technologies built on non-relational database architectures (hence NoSQL, or Not-Only SQL). NoSQL is not the best solution for every data management requirement, but it is often better suited to high-performance web-scale systems and big-data analyses. Such systems include document stores, key-value stores, native XML databases, graph databases, column stores, object stores, in-memory caches, multidimensional OLAP cubes and others.
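The key-value model underlying many NoSQL systems can be sketched in a few lines: opaque values addressed by key, with no schema and no joins. A real store adds persistence, sharding and replication; the class and keys below are purely illustrative.

```python
class KeyValueStore:
    """Toy in-memory key-value store illustrating the NoSQL access pattern."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value      # value is opaque: any shape, no schema

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("user:42", {"name": "Ana", "visits": 7})  # schemaless document value
profile = store.get("user:42")
```

Note the access pattern: everything is a single-key lookup, which is what makes such stores easy to partition and scale horizontally, at the cost of the rich cross-table queries a relational database offers.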