12-14 September 2018
FU Berlin | Habelschwerdter Allee 45 | 14195 Berlin | KL32 123
Programme and further information are available here: https://bit.ly/2BFUxdJ
In 1917, commenting on the rise of new media, the French poet Apollinaire urged to "plot/mechanise" ("machiner") poetry "as has been done for the world". A century later, the slogan's rich metaphor has been made all the sharper by the emergence of new technologies in literary studies: the Digital Humanities (DH) have gained remarkable momentum over the last 10-15 years, providing literary scholars with copious and entirely new research data: digital editions of texts, images, musical pieces, and other semiotic artifacts (accessible via Google Books, zeno.org, large digital editions such as PHI or the Perseus Digital Library, and many others). This has also fueled methodological advances in the humanities, such as the influential "distant reading" approach (Moretti 2013), which seeks to discover patterns in searchable corpora of literature by exploratory statistical methods and to use these patterns for interpretation and literary analysis. In poetry analysis, computational tools have been devised to support and develop metrical and rhythmical analysis, the analysis of spoken (readout) poetry, and stylometry, and ultimately to enable computer-assisted interpretation.
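The "distant reading" idea mentioned above, discovering patterns across a corpus rather than close-reading individual texts, can be sketched in miniature. The following toy example (invented two-line "corpus" and naive tokenization, purely illustrative) builds word-frequency profiles of the kind such exploratory statistics start from:

```python
from collections import Counter
import re

# Toy "corpus": two short fragments (illustrative placeholders,
# not drawn from any real digital edition).
corpus = {
    "poem_a": "the rose is red the violet blue",
    "poem_b": "red red rose of the summer dew",
}

def word_frequencies(text):
    """Tokenize naively on letter runs and count word occurrences."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

# A "distant reading" in miniature: compare frequency profiles
# across the whole corpus instead of reading each text closely.
profiles = {title: word_frequencies(text) for title, text in corpus.items()}
print(profiles["poem_b"]["red"])  # 2
```

Real distant-reading studies operate on the same principle, only scaled up to thousands of texts and to richer features (rhyme, meter, syntax) than raw word counts.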
Notwithstanding the success story of this data-driven approach, a "computational turn" (Berry 2011) has recently been advocated within the DH, bringing to the fore the actual algorithms and computational techniques of pattern recognition, machine learning, and deep learning (Bishop 2006). These technologies have become mainstream in many areas of digital processing and have sparked much of the success of, and critical concern about, digitization in everyday life. They have also proven successful across natural language processing, including the processing of prosodic data and encoded textual data. Nevertheless, in literary studies and poetry analysis alike, these deep learning methods have hardly been applied.
Filling this gap is the rationale of the proposed workshop. It will bring together experts in computational poetry analysis and experts in deep learning techniques in order to improve the digital analysis of poetic language features such as prosody, metrics, metaphor, and encoded text. The aim is to create opportunities for sharing good practice, for hands-on practical applications, and for learning specialized methods; to offer a platform for presenting and discovering ongoing research projects; and to reflect on the benefits and potential shortcomings of digital pattern recognition based on deep learning methods. To bring these two groups into genuine discussion, all contributions will be sent to participants electronically before the meeting. In this way, the workshop will devote equal time to expert presentations and to discussion between scholars and experts from the literary and computational fields.