
Advancing ideational social science
through transformer-based (language) models:
a pilot study on the imagined futures of Web3

 

Through strides in scholarly research, we now have a better understanding of ideas, their categories, and their causal effects, and it has become easier to 'measure' them with quantitative data. Nonetheless, many theoretical concepts in the field of ideas still lack empirical testing.

Thanks to the development of transformers, a deep learning architecture, it is now possible to take context and sequence information into account when analysing texts. Trained on massive datasets, transformers can apply this learned knowledge to text classification while being bound neither by language nor by medium (audio, image, text, etc.), since they work multilingually and multimodally.

In this pilot study, the aim is to test the usefulness of these transformer-based models. The ideational concept to be measured is imagined futures, i.e. how expectations of what is to come in the economy (forecasts etc.) influence real outcomes along the way. The corpus to be analysed is the public discourse around Web3 (or Web 3.0), a vision of a future decentralised, blockchain-based World Wide Web, considered the next step after the current Web 2.0, which is largely dominated by big technology companies.
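To illustrate what such a classification step could look like in practice, the following minimal sketch uses Hugging Face's zero-shot classification pipeline with a multilingual model to label a Web3-related statement. It is an illustrative assumption, not the project's actual pipeline: the model name, the example sentence, and the candidate labels are placeholders chosen for demonstration.

# Minimal sketch: zero-shot classification of a Web3 statement with a
# multilingual transformer. Model and labels are illustrative choices,
# not the project's actual setup.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # example multilingual NLI model
)

text = ("Web3 will hand control over data and value back "
        "from large platforms to their users.")
labels = ["optimistic imagined future",
          "pessimistic imagined future",
          "description of the present web"]

result = classifier(text, labels)
# Print the highest-scoring label and its score.
print(result["labels"][0], round(result["scores"][0], 3))

Because zero-shot classification only needs label names rather than annotated training data, a sketch like this could be a first pass before fine-tuning a model on manually coded examples.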

For a more detailed overview, see this poster.


Recipient: Timo Seidl (Profile)
Funding: Disruptive Innovation programme of the Austrian Academy of Sciences (ÖAW)
Duration: 2024-2025

 
