
Dr. Amir Semmo


Interactive graphics engineer and post-doctoral researcher in the Computer Graphics Systems group at the Hasso Plattner Institute / University of Potsdam, Germany.

Specialized in real-time rendering and shading, non-photorealistic rendering, computational aesthetics / cartographic design, and geovisualization.

August 2017

We presented the demo “Pictory: Combining Neural Style Transfer and Image Filtering” at SIGGRAPH 2017 Appy Hour. More information on the public release of “Pictory” is available here.

July 2017

We presented our paper “Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering?” at the 2017 Symposium on Non-Photorealistic Animation and Rendering (NPAR, part of Expressive). The presentation slides are available here.

May 2017

Our paper “Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering?” has been accepted as a contribution to the 2017 Symposium on Non-Photorealistic Animation and Rendering (NPAR, part of Expressive). We will also present the mobile app “Pictory: Combining Neural Style Transfer and Image Filtering” at SIGGRAPH 2017 Appy Hour.

December 2016

We also presented the demo “BeCasso: Artistic Image Processing and Editing on Mobile Devices” at the SIGGRAPH Asia 2016 Symposium on Mobile Graphics and Interactive Applications. The demo received a Best Demo Award.

December 2016

We presented the paper “Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices” at the SIGGRAPH Asia 2016 Symposium on Mobile Graphics and Interactive Applications.

November 2016

I successfully defended my doctoral thesis on the “Design and Implementation of Non-Photorealistic Rendering Techniques for 3D Geospatial Data”.

July 2016

We also presented the poster “Interactive Multi-scale Oil Paint Filtering on Mobile Devices” at SIGGRAPH 2016.

July 2016

We presented the demo “BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices” at SIGGRAPH 2016 Appy Hour.

May 2016

We presented the poster “Interactive Oil Paint Filtering On Mobile Devices” at Expressive 2016.

January 2016

Our journal article on “Image Stylization by Interactive Oil Paint Filtering” was published in Computers & Graphics.

2017

ProsumerFX: Mobile Design of Image Stylization Components

Tobias Dürschmid, Maximilian Söchting, Amir Semmo, Matthias Trapp, and Jürgen Döllner
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2017

Abstract, BibTeX, DOI

With the continuous advances of mobile graphics hardware, high-quality image stylization—e.g., based on image filtering, stroke-based rendering, and neural style transfer—is becoming feasible and increasingly used in casual creativity apps. The creative expression facilitated by these mobile apps, however, is typically limited with respect to the usage and application of pre-defined visual styles, which ultimately do not include their design and composition—an inherent requirement of prosumers. We present ProsumerFX, a GPU-based app that enables users to interactively design parameterizable image stylization components on-device by reusing building blocks of image processing effects and pipelines. Furthermore, the presentation of the effects can be customized by modifying the icons, names, and order of parameters and presets. Thereby, the customized visual styles are defined as platform-independent effects and can be shared with other users via a web-based platform and database. Together with the presented mobile app, this system approach supports collaborative work for designing visual styles, including their rapid prototyping, A/B testing, publishing, and distribution. Thus, it satisfies the needs for creative expression of both professionals and the general public.
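
As an illustration of the component idea, here is a minimal Python sketch in which an effect is an ordered list of parameterizable building blocks that can be serialized for sharing; the block names, parameters, and JSON format are purely illustrative and not ProsumerFX's actual scheme:

# Minimal sketch (not ProsumerFX's actual format): an image effect as an
# ordered list of parameterizable building blocks that can be serialized
# for sharing and re-applied on another device.
import json
import numpy as np

def brightness(img, gain=1.0):          # building block 1
    return np.clip(img * gain, 0.0, 1.0)

def quantize(img, levels=8):            # building block 2
    return np.round(img * (levels - 1)) / (levels - 1)

BLOCKS = {"brightness": brightness, "quantize": quantize}

effect = [  # a user-designed style: block name plus preset parameters
    {"block": "brightness", "params": {"gain": 1.2}},
    {"block": "quantize",   "params": {"levels": 6}},
]

def apply_effect(img, effect_description):
    for step in effect_description:
        img = BLOCKS[step["block"]](img, **step["params"])
    return img

photo = np.random.rand(256, 256, 3)      # stand-in for an input photo
stylized = apply_effect(photo, effect)
shared = json.dumps(effect)              # platform-independent description
print(shared, stylized.shape)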

@inproceedings{DSSTD17,
  author = { D{\"u}rschmid, Tobias and S{\"o}chting, Maximilian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { ProsumerFX: Mobile Design of Image Stylization Components },
  year = { 2017 },
  location = { Bangkok, Thailand },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3132787.3139208 }
}

Challenges in User Experience Design of Image Filtering Apps

Mandy Klingbeil, Sebastian Pasewaldt, Amir Semmo, and Jürgen Döllner
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2017

Abstract, BibTeX, DOI

Photo filtering apps successfully deliver image-based stylization techniques to a broad audience, in particular in the ubiquitous domain (e.g., smartphones, tablet computers). Interacting with these inherently complex techniques has so far mostly been approached in two different ways: (1) by exposing many (technical) parameters to the user, resulting in a professional application that typically requires expert domain knowledge, or (2) by hiding the complexity via presets that only allow the application of filters but prevent creative expression thereon. In this work, we outline challenges of and present approaches for providing interactive image filtering on mobile devices, thereby focusing on how to make them usable for people in their daily life. This is discussed by the example of BeCasso, a user-centric app for assisted image stylization that targets two user groups: mobile artists and users seeking casual creativity. Through user research and qualitative and quantitative user studies, we identify and outline usability issues that were shown to prevent both user groups from reaching their objectives when using the app. On the one hand, user-group targeting has been improved by an optimized user experience design. On the other hand, multiple levels of control have been implemented to ease the interaction and hide the underlying complex technical parameters. Evaluations underline that the presented approach can increase the usability of complex image stylization techniques for mobile apps.

@inproceedings{KPSD17,
  author = { Klingbeil, Mandy and Pasewaldt, Sebastian and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Challenges in User Experience Design of Image Filtering Apps },
  year = { 2017 },
  location = { Bangkok, Thailand },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3132787.3132803 }
}

Demo: Pictory - Neural Style Transfer and Editing with CoreML

Sebastian Pasewaldt, Amir Semmo, Mandy Klingbeil, and Jürgen Döllner
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo) 2017

Abstract, BibTeX, DOI

This work presents advances in the design and implementation of Pictory, an iOS app for artistic neural style transfer and interactive image editing using the CoreML and Metal APIs. Pictory combines the benefits of neural style transfer, e.g., high degree of abstraction on a global scale, with the interactivity of GPU-accelerated state-of-the-art image-based artistic rendering on a local scale. Thereby, the user is empowered to create high-resolution, abstracted renditions in a two-stage approach. First, a photo is transformed using a pre-trained convolutional neural network to obtain an intermediate stylized representation. Second, image-based artistic rendering techniques (e.g., watercolor, oil paint or toon filtering) are used to further stylize the image. Thereby, fine-scale texture noise—introduced by the style transfer—is filtered and interactive means are provided to individually adjust the stylization effects at run-time. Based on qualitative and quantitative user studies, Pictory has been redesigned and optimized to support casual users as well as mobile artists by providing effective, yet easy to understand, tools to facilitate image editing at multiple levels of control.

@inproceedings{PSKD17,
  author = { Pasewaldt, Sebastian and Semmo, Amir and Klingbeil, Mandy and D{\"o}llner, J{\"u}rgen },
  title = { Demo: Pictory - Neural Style Transfer and Editing with CoreML },
  year = { 2017 },
  location = { Bangkok, Thailand },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3132787.3132815 }
}

Pictory: Combining Neural Style Transfer and Image Filtering

Amir Semmo, Matthias Trapp, Jürgen Döllner, and Mandy Klingbeil
ACM SIGGRAPH Appy Hour 2017

Abstract, BibTeX, DOI, Paper (PDF)

This work presents Pictory, a mobile app that empowers users to transform photos into artistic renditions by using a combination of neural style transfer with user-controlled state-of-the-art nonlinear image filtering. The combined approach features merits of both artistic rendering paradigms: deep convolutional neural networks can be used to transfer style characteristics at a global scale, while image filtering is able to simulate phenomena of artistic media at a local scale. Thereby, the proposed app implements an interactive two-stage process: first, style presets based on pre-trained feed-forward neural networks are applied using GPU-accelerated compute shaders to obtain initial results. Second, the intermediate output is stylized via oil paint, watercolor, or toon filtering to inject characteristics of traditional painting media such as pigment dispersion (watercolor) as well as soft color blendings (oil paint), and to filter artifacts such as fine-scale noise. Finally, on-screen painting facilitates pixel-precise creative control over the filtering stage, e.g., to vary the brush and color transfer, while joint bilateral upsampling enables outputs at full image resolution suited for printing on real canvas.
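
To make the two-stage process concrete, the following Python sketch mimics it with simple stand-ins (a crude palette mapping in place of the neural style preset, Gaussian smoothing in place of the oil paint, watercolor, or toon filters); the per-pixel mask plays the role of on-screen painting:

# Minimal two-stage sketch of the Pictory idea (stand-ins, not the app's code):
# stage 1 applies a global style, stage 2 applies a local filter whose strength
# is controlled per pixel by an on-screen painting mask.
import numpy as np
from scipy.ndimage import gaussian_filter

def stage1_global_style(img, levels=5):
    # stand-in for a pre-trained feed-forward style network
    return np.round(img * (levels - 1)) / (levels - 1)

def stage2_local_filter(img, strength_mask, sigma=3.0):
    # stand-in for watercolor/oil-paint/toon filtering; the mask varies the
    # effect per pixel, as painted by the user
    smoothed = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], axis=-1)
    m = strength_mask[..., None]
    return (1.0 - m) * img + m * smoothed

photo = np.random.rand(512, 512, 3)
mask = np.zeros((512, 512))
mask[128:384, 128:384] = 1.0                      # "painted" region
result = stage2_local_filter(stage1_global_style(photo), mask)
print(result.shape)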

@inproceedings{STDKD17,
  author = { Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen and Klingbeil, Mandy },
  title = { Pictory: Combining Neural Style Transfer and Image Filtering },
  year = { 2017 },
  location = { Los Angeles, California },
  pages = { 5:1--5:2 },
  series = { Proceedings SIGGRAPH Appy Hour },
  month = { 8 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3098900.3098906 }
}

Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering?

Amir Semmo, Tobias Isenberg, and Jürgen Döllner
Proceedings International Symposium on Non-Photorealistic Animation and Rendering (NPAR) 2017

Abstract, BibTeX, DOI, Paper (PDF), Slides (PDF, 19.8 MiB)

In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue, while NST may represent a paradigm shift for IB-AR, that it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one of the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but so far has been limited because it relies on pre-defined image pairs for training or informs only low-level image features for texture transfers. Advancements in deep learning showed to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.

@inproceedings{SID17,
  author = { Semmo, Amir and Isenberg, Tobias and D{\"o}llner, J{\"u}rgen },
  title = { Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering? },
  year = { 2017 },
  location = { Los Angeles, California },
  pages = { 5:1--5:13 },
  series = { Proceedings International Symposium on Non-Photorealistic Animation and Rendering (NPAR) },
  month = { 7 },
  editor = { Holger Winnem{\"o}ller and Lyn Bartram },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3092919.3092920 }
}
2016

Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices

Amir Semmo, Tobias Dürschmid, Matthias Trapp, Mandy Klingbeil, Jürgen Döllner, and Sebastian Pasewaldt
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2016

Abstract, BibTeX, DOI, Paper (PDF), Filter Results (Images / ZIP, 34 MiB)

With the continuous development of mobile graphics hardware, interactive high-quality image stylization based on nonlinear filtering is becoming feasible and increasingly used in casual creativity apps. However, these apps often only serve high-level controls to parameterize image filters and generally lack support for low-level (artistic) control, thus automating art creation rather than assisting it. This work presents a GPU-based framework that enables the parameterization of image filters at three levels of control: (1) presets followed by (2) global parameter adjustments can be interactively refined by (3) complementary on-screen painting that operates within the filters' parameter spaces for local adjustments. The framework provides a modular XML-based effect scheme to effectively build complex image processing chains, using these interactive filters as building blocks, that can be efficiently processed on mobile devices. Thereby, global and local parameterizations are directed with higher-level algorithmic support to ease the interactive editing process, which is demonstrated by state-of-the-art stylization effects, such as oil paint filtering and watercolor rendering.
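
The three levels of control can be pictured as resolving a per-pixel parameter field, as in the following illustrative Python sketch (names and values are made up, not the framework's API):

# Sketch of the three levels of control described above (illustrative only):
# (1) a preset supplies default parameter values, (2) global sliders override
# them, and (3) on-screen painting refines them locally, yielding a per-pixel
# parameter field that a filter would consume.
import numpy as np

preset = {"abstraction": 0.5, "brush_size": 4.0}           # level 1: preset
global_adjust = {"abstraction": 0.7}                       # level 2: global slider

def per_pixel_parameters(name, shape, paint_mask, painted_value):
    base = global_adjust.get(name, preset[name])           # preset -> global
    field = np.full(shape, base, dtype=np.float32)
    return (1.0 - paint_mask) * field + paint_mask * painted_value  # -> local

h, w = 256, 256
mask = np.zeros((h, w), dtype=np.float32)
mask[:, :128] = 1.0                                        # user painted the left half
abstraction = per_pixel_parameters("abstraction", (h, w), mask, painted_value=0.2)
print(abstraction[0, 0], abstraction[0, -1])               # 0.2 locally, 0.7 elsewhere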

@inproceedings{SDTKDP16,
  author = { Semmo, Amir and D{\"u}rschmid, Tobias and Trapp, Matthias and Klingbeil, Mandy and D{\"o}llner, J{\"u}rgen and Pasewaldt, Sebastian },
  title = { Interactive Image Filtering with Multiple Levels-of-control on Mobile Devices },
  year = { 2016 },
  location = { Macau },
  pages = { 2:1--2:8 },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2999508.2999521 }
}

BeCasso: Artistic Image Processing and Editing on Mobile Devices

Sebastian Pasewaldt, Amir Semmo, Jürgen Döllner, and Frank Schlegel
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo) 2016

Abstract, BibTeX, DOI, Paper (PDF)

BeCasso is a mobile app that enables users to transform photos into high-quality, high-resolution non-photorealistic renditions, such as oil and watercolor paintings, cartoons, and colored pencil drawings, which are inspired by real-world paintings or drawing techniques. In contrast to neural network and physically-based approaches, the app employs state-of-the-art nonlinear image filtering. For example, oil paint and cartoon effects are based on smoothed structure information to interactively synthesize renderings with soft color transitions. BeCasso empowers users to easily create aesthetic renderings by implementing a two-fold strategy: First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Thereby, users can obtain initial renditions that may be fine-tuned afterwards. Second, it enables local style adjustments: using on-screen painting metaphors, users are able to locally adjust different stylization features, e.g., to vary the level of abstraction, pen, brush and stroke direction or the contour lines. In this way, the app provides tools for both higher-level interaction and low-level control to serve the different needs of non-experts and digital artists.

@inproceedings{PSDS16,
  author = { Pasewaldt, Sebastian and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Schlegel, Frank },
  title = { BeCasso: Artistic Image Processing and Editing on Mobile Devices },
  year = { 2016 },
  location = { Macau },
  pages = { 14:1--14:1 },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2999508.2999518 }
}

Design and Implementation of Non-Photorealistic Rendering Techniques for 3D Geospatial Data

Amir Semmo
Doctoral Thesis, Hasso Plattner Institute at the University of Potsdam 2016

Abstract, BibTeX, URN, Thesis (PDF)

This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. The techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics—unlike photorealistic rendering—to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles—photorealistic and non-photorealistic—and provides their seamless transition according to user tasks, camera view, and image resolution.

@phdthesis{S16,
  author = { Amir Semmo },
  title = { Design and Implementation of Non-Photorealistic Rendering Techniques for 3D Geospatial Data },
  school = { Hasso Plattner Institute at the University of Potsdam },
  year = { 2016 },
  month = { 11 },
  address = { Potsdam, Germany },
  url = { http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-99525 }
}

BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices

Amir Semmo, Jürgen Döllner, and Frank Schlegel
ACM SIGGRAPH Appy Hour 2016

Abstract, BibTeX, DOI, Paper (PDF)

BeCasso is a mobile app that enables users to transform photos into an oil paint look that is inspired by traditional painting elements. In contrast to stroke-based approaches, the app uses state-of-the-art nonlinear image filtering techniques based on smoothed structure information to interactively synthesize oil paint renderings with soft color transitions. BeCasso empowers users to easily create aesthetic oil paint renderings by implementing a two-fold strategy. First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Second, it introduces a novel interaction approach that operates within the parameter spaces of the stylization effect to facilitate creative control over the visual output: on-screen painting enables users to locally adjust the appearance in image regions, e.g., to vary the level of abstraction, brush and stroke direction. This way, the app provides tools for both higher-level interaction and low-level control to serve the different needs of non-experts and digital artists.

@inproceedings{SDS2016,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen and Schlegel, Frank },
  title = { BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  year = { 2016 },
  month = { 7 },
  location = { Anaheim, California },
  pages = { 6:1--6:1 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2936744.2936750 }
}

Interactive Multi-scale Oil Paint Filtering on Mobile Devices

Amir Semmo, Mattias Trapp, Tobias Dürschmid, Jürgen Döllner, and Sebastian Pasewaldt
ACM SIGGRAPH Posters 2016

Abstract, BibTeX, DOI, Paper (PDF)

This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. To this end, a multi-scale approach that processes image pyramids is introduced, which uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates the implementation of interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.
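
A minimal NumPy sketch of the joint bilateral upsampling step is given below; parameters are illustrative, and the paper's flow alignment and GPU implementation are omitted:

# Minimal joint bilateral upsampling sketch (pure NumPy, not the paper's GPU
# version): a low-resolution filter result is upsampled under the guidance of
# the high-resolution input so that edges stay crisp.
import numpy as np

def joint_bilateral_upsample(low, guide_low, guide, scale,
                             radius=2, sigma_s=1.0, sigma_r=0.1):
    # low:       (h, w, 3) filtered low-resolution result
    # guide_low: (h, w, 3) downsampled guidance (low-resolution input photo)
    # guide:     (H, W, 3) full-resolution guidance (the input photo)
    H, W = guide.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    cy = np.clip(np.round(ys / scale).astype(int), 0, low.shape[0] - 1)
    cx = np.clip(np.round(xs / scale).astype(int), 0, low.shape[1] - 1)
    num = np.zeros_like(guide, dtype=np.float64)
    den = np.zeros((H, W, 1))
    for dy in range(-radius, radius + 1):          # small window in low-res space
        for dx in range(-radius, radius + 1):
            qy = np.clip(cy + dy, 0, low.shape[0] - 1)
            qx = np.clip(cx + dx, 0, low.shape[1] - 1)
            w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))   # spatial term
            diff = guide - guide_low[qy, qx]                          # range term
            w_r = np.exp(-np.sum(diff * diff, axis=-1, keepdims=True) / (2 * sigma_r ** 2))
            w = w_s * w_r
            num += w * low[qy, qx]
            den += w
    return num / np.maximum(den, 1e-8)

guide = np.random.rand(256, 256, 3)               # full-resolution input photo
guide_low = guide[::4, ::4]                       # crude 4x downsampling
low = guide_low ** 0.5                            # stand-in for the low-res oil paint result
print(joint_bilateral_upsample(low, guide_low, guide, scale=4).shape)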

@inproceedings{STDDP2016,
  author = { Semmo, Amir and Trapp, Matthias and D{\"u}rschmid, Tobias and D{\"o}llner, J{\"u}rgen and Pasewaldt, Sebastian },
  title = { Interactive Multi-scale Oil Paint Filtering on Mobile Devices },
  booktitle = { SIGGRAPH Posters },
  year = { 2016 },
  location = { Anaheim, California },
  pages = { 42:1--42:2 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2945078.2945120 }
}

Interactive Oil Paint Filtering On Mobile Devices

Amir Semmo, Matthias Trapp, Sebastian Pasewaldt, and Jürgen Döllner
Expressive Poster Session 2016

Abstract, BibTeX, DOI, Paper (PDF)

Image stylization enjoys a growing popularity on mobile devices to foster casual creativity. However, the implementation and provision of high-quality image filters for artistic rendering is still faced by the inherent limitations of mobile graphics hardware such as computing power and memory resources. This work presents a mobile implementation of a filter that transforms images into an oil paint look, thereby highlighting concepts and techniques on how to perform multi-stage nonlinear image filtering on mobile devices. The proposed implementation is based on OpenGL ES and the OpenGL ES shading language, and supports on-screen painting to interactively adjust the appearance in local image regions, e.g., to vary the level of abstraction, brush, and stroke direction. Evaluations of the implementation indicate interactive performance and results that are of similar aesthetic quality to its original desktop variant.

@inproceedings{STPD2016,
  author = { Semmo, Amir and Trapp, Matthias and Pasewaldt, Sebastian and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Oil Paint Filtering On Mobile Devices },
  booktitle = { Expressive - Posters, Artworks, and Bridging Papers },
  year = { 2016 },
  editor = { Ergun Akleman and Lyn Bartram and An{\i}l {\c{C}}amc{\i} and Angus Forbes and Penousal Machado },
  publisher = { The Eurographics Association },
  doi = { 10.2312/exp.20161255 }
}

Image Stylization by Interactive Oil Paint Filtering

Amir Semmo, Daniel Limberger, Jan Eric Kyprianidis, and Jürgen Döllner
Computers & Graphics 2016

Abstract, BibTeX, DOI, Paper (PDF), Filter Results (Images / ZIP, 191 MiB)

This paper presents an interactive system for transforming images into an oil paint look. The system comprises two major stages. First, it derives dominant colors from an input image for feature-aware recolorization and quantization to conform with a global color palette. Afterwards, it employs non-linear filtering based on the smoothed structure adapted to the main feature contours of the quantized image to synthesize a paint texture in real-time. Our filtering approach leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting. To this end, our system introduces a generalized brush-based painting interface that operates within parameter spaces to locally adjust the level of abstraction of the filtering effects. Several results demonstrate the various applications of our filtering approach to different genres of photography.
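
The palette stage can be pictured with the following Python sketch, which uses a plain k-means in RGB as a stand-in for the paper's feature-aware dominant-color extraction:

# Sketch of the first stage described above: the image is quantized against a
# small global palette before paint-texture synthesis. A simple k-means in RGB
# stands in for the paper's feature-aware dominant-color extraction.
import numpy as np

def dominant_palette(img, k=8, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3)
    palette = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for i in range(k):
            members = pixels[labels == i]
            if len(members):
                palette[i] = members.mean(axis=0)
    return palette, labels

img = np.random.rand(128, 128, 3)
palette, labels = dominant_palette(img)
quantized = palette[labels].reshape(img.shape)     # recolorized to the global palette
print(palette.shape, quantized.shape)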

@article{SLKD16,
  author = { Semmo, Amir and Limberger, Daniel and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Image Stylization by Interactive Oil Paint Filtering },
  journal = { Computers \& Graphics },
  year = { 2016 },
  doi = { 10.1016/j.cag.2015.12.001 }
}
2015

Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques

Amir Semmo, Matthias Trapp, Markus Jobst, and Jürgen Döllner
The Cartographic Journal (International Cartographic Conference) 2015

Abstract, BibTeX, DOI, Paper / Preprint (PDF), Link to Journal

In economy, society, and personal life, map-based interactive geospatial visualization has become a natural element of a growing number of applications and systems. The visualization of 3D geospatial information, however, raises the question of how to represent the information in an effective way. Considerable research has been done in technology-driven directions in the fields of cartography and computer graphics (e.g., design principles, visualization techniques). Here, non-photorealistic rendering represents a promising visualization category–situated between both fields–that offers a large number of degrees of freedom for the cartography-oriented visual design of complex 2D and 3D geospatial information for a given application context. Still today, however, specifications and techniques for mapping cartographic design principles to the state-of-the-art rendering pipeline of 3D computer graphics remain to be explored. This paper revisits cartographic design principles for 3D geospatial visualization and introduces an extended 3D semiotic model that complies with the general, interactive visualization pipeline. Based on this model, we propose non-photorealistic rendering techniques to interactively synthesize cartographic renditions of basic feature types, such as terrain, water, and buildings. In particular, it includes a novel iconification concept to seamlessly interpolate between photorealistic and cartographic representations of 3D landmarks. Our work concludes with a discussion of open challenges in this field of research, including topics such as user interaction and evaluation.

@article{STJD2015,
  author = { Semmo, Amir and Trapp, Matthias and Jobst, Markus and D{\"o}llner, J{\"u}rgen },
  title = { Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques },
  journal = { The Cartographic Journal },
  year = { 2015 },
  volume = { 52 },
  number = { 2 },
  pages = { 95--106 },
  doi = { 10.1080/00087041.2015.1119462 }
}

Image Stylization by Oil Paint Filtering using Color Palettes

Amir Semmo, Daniel Limberger, Jan Eric Kyprianidis, and Jürgen Döllner
Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2015

Abstract, BibTeX, DOI, Paper (PDF), Filter Results (Images / ZIP, 191 MiB)

This paper presents an approach for transforming images into an oil paint look. To this end, a color quantization scheme is proposed that performs feature-aware recolorization using the dominant colors of the input image. In addition, an approach for real-time computation of paint textures is presented that builds on the smoothed structure adapted to the main feature contours of the quantized image. Our stylization technique leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting.

@inproceedings{SLKD14,
  author = { Semmo, Amir and Limberger, Daniel and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Image Stylization by Oil Paint Filtering using Color Palettes },
  booktitle = { Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) },
  year = { 2015 },
  pages = { 149--158 },
  month = { 6 },
  doi = { 10.2312/exp.20151188 }
}

Interactive Image Filtering for Level-of-Abstraction Texturing of Virtual 3D Scenes

Amir Semmo and Jürgen Döllner
Computers & Graphics 2015

Abstract, BibTeX, DOI, Paper (PDF)

Texture mapping is a key technology in computer graphics. For the visual design of 3D scenes, in particular, effective texturing depends significantly on how important contents are expressed, e.g., by preserving global salient structures, and how their depiction is cognitively processed by the user in an application context. Edge-preserving image filtering is one key approach to address these concerns. Much research has focused on applying image filters in a post-process stage to generate artistically stylized depictions. However, these approaches generally do not preserve depth cues, which are important for the perception of 3D visualization (e.g., texture gradient). To this end, filtering is required that processes texture data coherently with respect to linear perspective and spatial relationships. In this work, we present an approach for texturing 3D scenes with perspective coherence by arbitrary image filters. We propose decoupled deferred texturing with (1) caching strategies to interactively perform image filtering prior to texture mapping and (2) for each mipmap level separately to enable a progressive level of abstraction, using (3) direct interaction interfaces to parameterize the visualization according to spatial, semantic, and thematic data. We demonstrate the potentials of our method by several applications using touch or natural language inputs to serve the different interests of users in specific information, including illustrative visualization, focus+context visualization, geometric detail removal, and semantic depth of field. The approach supports frame-to-frame coherence, order-independent transparency, multitexturing, and content-based filtering. In addition, it seamlessly integrates into real-time rendering pipelines, and is extensible for custom interaction techniques.
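
The per-mipmap-level filtering can be sketched as follows in Python, with a Gaussian as a stand-in for an arbitrary image filter and made-up abstraction parameters; coarser levels receive stronger abstraction so that distant surfaces appear more abstracted under standard mipmapping:

# Illustrative sketch of filtering each mipmap level separately (the paper's
# deferred texturing and caching machinery is omitted): stronger abstraction
# is applied at coarser levels to obtain a progressive level of abstraction.
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_mipmaps(texture, levels=4, base_sigma=0.5):
    pyramid = []
    current = texture
    for level in range(levels):
        sigma = base_sigma * (level + 1)           # stronger abstraction per level
        filtered = np.stack([gaussian_filter(current[..., c], sigma)
                             for c in range(current.shape[-1])], axis=-1)
        pyramid.append(filtered)
        current = filtered[::2, ::2]               # next (smaller) mipmap level
    return pyramid

tex = np.random.rand(256, 256, 3)
for i, level in enumerate(filtered_mipmaps(tex)):
    print(i, level.shape)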

@article{SD2015,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Image Filtering for Level-of-Abstraction Texturing of Virtual 3D Scenes },
  journal = { Computers \& Graphics },
  year = { 2015 },
  volume = { 52 },
  pages = { 181--198 },
  doi = { 10.1016/j.cag.2015.02.001 }
}

Interactive Rendering and Stylization of Transportation Networks Using Distance Fields

Matthias Trapp, Amir Semmo, and Jürgen Döllner
Proceedings of the 10th International Conference on Computer Graphics Theory and Applications (GRAPP) 2015

Abstract, BibTeX, DOI, Paper (PDF)

Transportation networks, such as streets, railroads or metro systems, constitute primary elements in cartography for reckoning and navigation. In recent years, they have become an increasingly important part of 3D virtual environments for the interactive analysis and communication of complex hierarchical information, for example in routing, logistics optimization, and disaster management. A variety of rendering techniques have been proposed that deal with integrating transportation networks within these environments, but have so far neglected the many challenges of an interactive design process to adapt their spatial and thematic granularity (i.e., level-of-detail and level-of-abstraction) according to a user's context. This paper presents an efficient real-time rendering technique for the view-dependent rendering of geometrically complex transportation networks within 3D virtual environments. Our technique is based on distance fields using deferred texturing that shifts the design process to the shading stage for real-time stylization. We demonstrate and discuss our approach by means of street networks using cartographic design principles for context-aware stylization, including view-dependent scaling for clutter reduction, contour-lining to provide figure-ground, handling of street crossings via shading-based blending, and task-dependent colorization. Finally, we present potential usage scenarios and applications together with a performance evaluation of our implementation.
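
A much-simplified CPU sketch of the distance-field idea is shown below (Python/SciPy, illustrative parameters); the distance to the street centerline drives the fill and contour lining, and could likewise drive view-dependent scaling in a shader:

# Small sketch of distance-field based street stylization (SciPy on the CPU
# instead of the paper's GPU deferred texturing): the unsigned distance to the
# street centerline determines fill and contour lining in a "shading" step.
import numpy as np
from scipy.ndimage import distance_transform_edt

h, w = 200, 200
centerline = np.zeros((h, w), dtype=bool)
centerline[100, :] = True                       # one horizontal street
centerline[:, 60] = True                        # one vertical street

dist = distance_transform_edt(~centerline)      # distance to nearest centerline pixel
half_width, contour = 6.0, 1.5
street_fill = dist <= half_width
contour_line = (dist > half_width) & (dist <= half_width + contour)

image = np.ones((h, w, 3))
image[street_fill] = (0.9, 0.85, 0.6)           # task-dependent street color
image[contour_line] = (0.2, 0.2, 0.2)           # figure-ground contour lining
print(image.shape, street_fill.sum(), contour_line.sum())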

@inproceedings{TSD2015,
  author = { Trapp, Matthias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Rendering and Stylization of Transportation Networks Using Distance Fields },
  booktitle = { Proceedings of the 10th International Conference on Computer Graphics Theory and Applications (GRAPP) },
  year = { 2015 },
  pages = { 207--219 },
  doi = { 10.5220/0005310502070219 }
}
2014

An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments

Amir Semmo and Jürgen Döllner
Proceedings 2nd ACM SIGSPATIAL Workshop on MapInteraction 2014

Abstract, BibTeX, DOI, Paper (PDF)

3D geovirtual environments constitute effective media for the analysis and communication of complex geospatial data. Today, these environments are often visualized using static graphical variants (e.g., 2D maps, 3D photorealistic) from which a user is able to choose. To serve the different interests of users in specific information, however, the spatial and thematic granularity at which model contents are represented (i.e., level of abstraction) should be dynamically adapted to the user's context, which requires specialized interaction techniques for parameterization. In this work, we present a framework that enables interaction interfaces to parameterize the level-of-abstraction visualization according to spatial, semantic, and thematic data. The framework is implemented in a visualization system that provides image-based rendering techniques for context-aware abstraction and highlighting. Using touch and natural language interfaces, we demonstrate its versatile application to geospatial tasks, including exploration, navigation, and orientation.

@inproceedings{SD2014_3,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments },
  booktitle = { Proceedings 2nd ACM SIGSPATIAL Workshop on MapInteraction (MapInteract) },
  year = { 2014 },
  month = { 11 },
  location = { Dallas/Fort Worth, Texas },
  pages = { 43--49 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2677068.2677072 }
}

Multi-Perspective 3D Panoramas

Sebastian Pasewaldt, Amir Semmo, Matthias Trapp, and Jürgen Döllner
International Journal of Geographical Information Science (IJGIS) 2014

Abstract, BibTeX, DOI, Paper (PDF)

This article presents multi-perspective 3D panoramas that focus on visualizing 3D geovirtual environments (3D GeoVEs) for navigation and exploration tasks. Their key element, a multi-perspective view, seamlessly combines what is seen from multiple viewpoints into a single image. This approach facilitates the presentation of information for virtual 3D city and landscape models, particularly by reducing occlusions, increasing screen-space utilization, and providing additional context within a single image. We complement multi-perspective views with cartographic visualization techniques to stylize features according to their semantics and highlight important or prioritized information. When combined, both techniques constitute the core implementation of interactive, multi-perspective 3D panoramas. They offer a large number of effective means for visual communication of 3D spatial information, a high degree of customization with respect to cartographic design, and manifold applications in different domains. We discuss design decisions of 3D panoramas for the exploration of and navigation in 3D GeoVEs. We also discuss a preliminary user study that indicates that 3D panoramas are a promising approach for navigation systems using 3D GeoVEs.

@article{PSTD2014,
  author = { Pasewaldt, Sebastian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Multi-Perspective 3D Panoramas },
  journal = { International Journal of Geographical Information Science (IJGIS) },
  year = { 2014 },
  volume = { 28 },
  pages = { 2030--2051 },
  number = { 10 },
  doi = { 10.1080/13658816.2014.922686 }
}

Oil Paint Filtering Using Color Palettes For Colorization

Amir Semmo and Jürgen Döllner
Expressive Poster Session 2014

Abstract, BibTeX, Paper (PDF)

We present a novel technique for oil paint filtering that uses color palettes for colorization. First, dominant feature-aware colors are derived from the input image via entropy-based metrics. Seed pixels are then determined and propagated to the remaining pixels by adopting the optimization framework of Levin et al. [2004] for feature-aware colorization. Finally, the quantized output is combined with flow-based highlights and contour lines to simulate paint texture. Our technique leads to homogeneous outputs in the color domain and enables interactive control over color definitions.

@misc{SD2014_2,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Oil Paint Filtering Using Color Palettes For Colorization },
  booktitle = { Expressive Poster Session },
  year = { 2014 }
}

Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes

Amir Semmo and Jürgen Döllner
Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2014

Abstract, BibTeX, DOI, Paper (PDF), Additional Material (PDF), Video (Youtube)

Texture mapping is a key technology in computer graphics for visual design of rendered 3D scenes. An effective information transfer of surface properties, encoded by textures, however, depends significantly on how important information is highlighted and cognitively processed by the user in an application context. Edge-preserving image filtering is a promising approach to address this concern while preserving global salient structures. Much research has focused on applying image filters in a post-process stage to foster an artistically stylized rendering, but these approaches are generally not able to preserve depth cues important for 3D visualization (e.g., texture gradient). To this end, filtering that processes texture data coherently with respect to linear perspective and spatial relationships is required. In this work, we present a system that enables processing of textured 3D scenes with perspective coherence by arbitrary image filters. We propose decoupled deferred texturing with (1) caching strategies to interactively perform image filtering prior to texture mapping, and (2) for each mipmap level separately to enable a progressive level of abstraction. We demonstrate the potentials of our methods on several applications, including illustrative visualization, focus+context visualization, geometric detail removal, and depth of field. Our system supports frame-to-frame coherence, order-independent transparency, multitexturing, and content-based filtering.

@inproceedings{SD14,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes },
  booktitle = { Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) },
  year = { 2014 },
  pages = { 5--14 },
  month = { 8 },
  doi = { 10.1145/2630099.2630101 }
}
2013

Interactive Projective Texturing for Non-Photorealistic Shading of Technical 3D Models

Roland Lux, Matthias Trapp, Amir Semmo, and Jürgen Döllner
Proceedings of 11th Theory and Practice of Computer Graphics 2013 Conference (TP.CG) 2013

Abstract, BibTeX, Video (Youtube), Paper (PDF)

This paper presents a novel interactive rendering technique for creating and editing shadings for man-made objects in technical 3D visualizations. In contrast to shading approaches that use intensities computed based on surface normals (e.g., Phong, Gooch, Toon shading), the presented approach uses one-dimensional gradient textures, which can be parametrized and interactively manipulated based on per-object bounding volume approximations. The fully hardware-accelerated rendering technique is based on projective texture mapping and customizable intensity transfer functions. A provided performance evaluation shows comparable results to traditional normal-based shading approaches. The work also introduces simple direct-manipulation metaphors that enable interactive user control of the gradient texture alignment and intensity transfer functions.

@inproceedings{LTSD13,
  author = { Lux, Roland and Trapp, Matthias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Projective Texturing for Non-Photorealistic Shading of Technical 3D Models },
  booktitle = { Proceedings of 11th Theory and Practice of Computer Graphics Conference (TP.CG) },
  year = { 2013 },
  editor = { Silvester Czanner and Wen Tang },
  pages = { 101--108 },
  month = { 9 },
  publisher = { The Eurographics Association },
  isbn = { 978-3-905673-98-2 }
}

Real-Time Rendering of Water Surfaces with Cartography-Oriented Design

Amir Semmo, Jan Eric Kyprianidis, Matthias Trapp, and Jürgen Döllner
Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2013

Abstract, BibTeX, DOI, Paper (PDF), Presentation Slides (PDF), Video (Youtube)

More than 70% of the Earth's surface is covered by oceans, seas, and lakes, making water surfaces one of the primary elements in geospatial visualization. Traditional approaches in computer graphics simulate and animate water surfaces in the most realistic ways. However, to improve orientation, navigation, and analysis tasks within 3D virtual environments, these surfaces need to be carefully designed to enhance shape perception and land-water distinction. We present an interactive system that renders water surfaces with cartography-oriented design using the conventions of mapmakers. Our approach is based on the observation that hand-drawn maps utilize and align texture features to shorelines with non-linear distance to improve figure-ground perception and express motion. To obtain local orientation and principal curvature directions, first, our system computes distance and feature-aligned distance maps. Given these maps, waterlining, water stippling, contour-hatching, and labeling are applied in real-time with spatial and temporal coherence. The presented methods can be useful for map exploration, landscaping, urban planning, and disaster management, which is demonstrated by various real-world virtual 3D city and landscape models.
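
The waterlining convention can be sketched as follows (Python/SciPy, illustrative parameters); concentric lines are placed at non-linearly increasing distances from the shoreline, while the paper's feature alignment, stippling, labeling, and animation are omitted:

# Waterlining sketch on the CPU (assumed parameters): lines are drawn in the
# water at distances from the shoreline whose spacing grows away from land,
# echoing the hand-drawn convention described above.
import numpy as np
from scipy.ndimage import distance_transform_edt

h, w = 256, 256
land = np.zeros((h, w), dtype=bool)
land[:, :96] = True                              # left third is land, rest is water

dist_to_shore = distance_transform_edt(~land)    # distance of water pixels to land
bands = np.cumsum(4.0 * 1.5 ** np.arange(5))     # non-linearly increasing spacing
waterlines = np.zeros((h, w), dtype=bool)
for b in bands:
    waterlines |= (~land) & (np.abs(dist_to_shore - b) < 0.75)

# grayscale rendition: dark waterlines, light land, white open water
image = np.where(waterlines, 0.1, np.where(land, 0.8, 1.0))
print(image.shape, waterlines.sum())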

@inproceedings{SKTD13,
  author = { Semmo, Amir and Kyprianidis, Jan Eric and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Real-Time Rendering of Water Surfaces with Cartography-Oriented Design },
  year = { 2013 },
  series = { Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) },
  pages = { 5--14 },
  month = { 7 },
  doi = { 10.1145/2487276.2487277 }
}
2012

Towards Comprehensible Digital 3D Maps

Sebastian Pasewaldt, Amir Semmo, Matthias Trapp, and Jürgen Döllner
Proceedings Service-Oriented Mapping (SOMAP) 2012

Abstract, BibTeX, Paper (PDF), Slides (PDF)

Digital mapping services have become fundamental tools in economy and society to provide domain experts and non-experts with customized, multi-layered map contents. In particular because of the continuous advancements in the acquisition, provision, and visualization of virtual 3D city and landscape models, 3D mapping services, today, represent key components to a growing number of applications, like car navigation, education, or disaster management. However, current systems and applications providing digital 3D maps are faced by drawbacks and limitations, such as occlusion, visual clutter, or insufficient use of screen space, that impact an effective comprehension of geoinformation. To this end, cartographers and computer graphics engineers developed design guidelines, rendering and visualization techniques that aim to increase the effectiveness and expressiveness of digital 3D maps, but whose seamless combination has yet to be achieved. This work discusses potentials of digital 3D maps that are based on combining cartography-oriented rendering techniques and multi-perspective views. For this purpose, a classification of cartographic design principles, visualization techniques, as well as suitable combinations are identified that aid comprehension of digital 3D maps. According to this classification, a prototypical implementation demonstrates the benefits of multi-perspective and non-photorealistic rendering techniques for visualization of 3D map contents. In particular, it enables (1) a seamless combination of cartography-oriented and photorealistic graphic styles while (2) increasing screen-space utilization, and (3) simultaneously directing a viewer’s gaze to important or prioritized information.

@inproceedings{PSTD12,
  author = { Pasewaldt, Sebastian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Towards Comprehensible Digital 3D Maps },
  booktitle = { Service-Oriented Mapping (SOMAP) },
  year = { 2012 },
  editor = { Markus Jobst },
  pages = { 261--276 },
  month = { 11 },
  organization = { International Cartographic Association },
  publisher = { Jobstmedia Management Verlag, Wien }
}

Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments

Juri Engel, Amir Semmo, Matthias Trapp, and Jürgen Döllner
Proceedings 18th International Workshop on Vision, Modeling and Visualization (VMV) 2012

Abstract, BibTeX, DOI, Paper (PDF)

Using colors for thematic mapping is a fundamental approach in visualization, and has become essential for 3D virtual environments to effectively communicate multidimensional, thematic information. Preserving depth cues within these environments to emphasize spatial relations between geospatial features remains an important issue. A variety of rendering techniques have been developed to preserve depth cues in 3D information visualization, including shading, global illumination, and image stylization. However, these techniques alter color values, which may lead to ambiguity in a color mapping and loss of information. Depending on the applied rendering techniques and color mapping, this loss should be reduced while still preserving depth cues when communicating thematic information. This paper presents the results of a quantitative and qualitative user study that evaluates the impact of rendering techniques on information and spatial perception when using visualization of thematic data in 3D virtual environments. We report the results of this study with respect to four perception-related tasks, showing significant differences in error rate and task completion time for different rendering techniques and color mappings.

@inproceedings{ESTD13,
  author = { Engel, Juri and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments },
  booktitle = { Proceedings Vision, Modeling \& Visualization },
  year = { 2013 },
  pages = { 25--32 },
  doi = { 10.2312/PE.VMV.VMV13.025-032 }
}

Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions

Amir Semmo, Matthias Trapp, Jan Eric Kyprianidis, and Jürgen Döllner
Computer Graphics Forum (Proceedings EuroVis) 2012

Abstract, BibTeX, DOI, Paper (PDF), Video (Youtube)

Virtual 3D city models play an important role in the communication of complex geospatial information in a growing number of applications, such as urban planning, navigation, tourist information, and disaster management. In general, homogeneous graphic styles are used for visualization. For instance, photorealism is suitable for detailed presentations, and non-photorealism or abstract stylization is used to facilitate guidance of a viewer's gaze to prioritized information. However, to adapt visualization to different contexts and contents and to support saliency-guided visualization based on user interaction or dynamically changing thematic information, a combination of different graphic styles is necessary. Design and implementation of such combined graphic styles pose a number of challenges, specifically from the perspective of real-time 3D visualization. In this paper, the authors present a concept and an implementation of a system that enables different presentation styles, their seamless integration within a single view, and parametrized transitions between them, which are defined according to tasks, camera view, and image resolution. The paper outlines potential usage scenarios and application fields together with a performance evaluation of the implementation.
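
One way to picture a parametrized style transition is a per-pixel blend between two renditions of the same view, as in the following illustrative Python sketch (here driven by distance to a focus point; in the paper the transitions are defined according to tasks, camera view, and image resolution):

# Sketch of a parametrized transition between two graphic styles (illustrative
# only): a photorealistic and an abstracted rendition of the same view are
# blended per pixel by a mask, here derived from distance to a focus point.
import numpy as np

def style_transition(photoreal, abstracted, focus, radius):
    h, w = photoreal.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.sqrt((ys - focus[0]) ** 2 + (xs - focus[1]) ** 2)
    t = np.clip((d - radius) / radius, 0.0, 1.0)[..., None]  # 0 near focus, 1 far away
    return (1.0 - t) * photoreal + t * abstracted

photoreal = np.random.rand(240, 320, 3)
abstracted = np.round(photoreal * 4) / 4          # stand-in for a stylized rendition
blended = style_transition(photoreal, abstracted, focus=(120, 160), radius=60)
print(blended.shape)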

@article{STKD12,
  author = { Semmo, Amir and Trapp, Matthias and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions },
  journal = { Computer Graphics Forum },
  year = { 2012 },
  volume = { 31 },
  pages = { 885--894 },
  number = { 3 },
  note = { Proceedings EuroVis 2012 },
  doi = { 10.1111/j.1467-8659.2012.03081.x }
}

Concepts for Cartography-Oriented Visualization of Virtual 3D City Models

Amir Semmo, Dieter Hildebrandt, Matthias Trapp, and Jürgen Döllner
Photogrammetrie - Fernerkundung - Geoinformation (PFG) 2012

Abstract, BibTeX, DOI, Paper (PDF)

Virtual 3D city models serve as an effective medium with manifold applications in geoinformation systems and services. To date, most 3D city models are visualized using photorealistic graphics. But an effective communication of geoinformation significantly depends on how important information is designed and cognitively processed in the given application context. One possibility to visually emphasize important information is based on non-photorealistic rendering, which comprehends artistic depiction styles and is characterized by its expressiveness and communication aspects. However, a direct application of non-photorealistic rendering techniques primarily results in monotonic visualization that lacks cartographic design aspects. In this work, we present concepts for cartography-oriented visualization of virtual 3D city models. These are based on coupling non-photorealistic rendering techniques and semantics-based information for a user, context, and media-dependent representation of thematic information. This work highlights challenges for cartography-oriented visualization of 3D geovirtual environments, presents stylization techniques and discusses their applications and ideas for a standardized visualization. In particular, the presented concepts enable a real-time and dynamic visualization of thematic geoinformation.

@article{SHTD2012,
  author = { Semmo, Amir and Hildebrandt, Dieter and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Concepts for Cartography-Oriented Visualization of Virtual 3D City Models },
  journal = { Photogrammetrie - Fernerkundung - Geoinformation (PFG) },
  year = { 2012 },
  pages = { 455--465 },
  number = { 4 },
  doi = { 10.1127/1432-8364/2012/0131 },
  issn = { 1432-8364 },
  publisher = { E. Schweizerbart'sche Verlagsbuchhandlung }
}

Colonia 3D - Communication of Virtual 3D Reconstructions in Public Spaces

Matthias Trapp, Amir Semmo, Rafael Pokorski, Claus-Daniel Herrmann, Jürgen Döllner, Michael Eichhorn, and Michael Heinzelmann
International Journal of Heritage in the Digital Era (IJHDE) 2012

Abstract, BibTeX, DOI, Paper (PDF)

The communication of cultural heritage in public spaces, such as museums or exhibitions, has gained more and more importance during the last years. The possibilities of interactive 3D applications open a new degree of freedom beyond the mere presentation of static visualizations, such as pre-produced video or image data. A user is now able to directly interact with 3D virtual environments that enable the depiction and exploration of digital cultural heritage artifacts in real-time. However, such technology requires concepts and strategies for guiding a user through these scenarios, since varying levels of experience with interactive media can be assumed. This paper presents a concept as well as an implementation for the communication of digital cultural heritage in public spaces, by the example of the project Roman Cologne. It describes the results achieved by an interdisciplinary team of archaeologists, designers, and computer graphics engineers with the aim of virtually reconstructing an interactive, high-detail 3D city model of Roman Cologne.

@article{TSPHDEH12,
  author = { Trapp, Matthias and Semmo, Amir and Pokorski, Rafael and Herrmann, Claus-Daniel and D{\"o}llner, J{\"u}rgen and Eichhorn, Michael and Heinzelmann, Michael },
  title = { Colonia 3D - Communication of Virtual 3D Reconstructions in Public Spaces },
  journal = { International Journal of Heritage in the Digital Era (IJHDE) },
  year = { 2012 },
  volume = { 1 },
  pages = { 45--74 },
  number = { 1 },
  month = { 1 },
  doi = { 10.1260/2047-4970.1.1.45 },
  editor = { Marinos Ioannides },
  publisher = { Multi-Science Publishing }
}
2011

Colonia3D

Matthias Trapp, Amir Semmo, and Jürgen Döllner
Proceedings 9. Konferenz Kultur und Informatik - Multimediale Systeme 2011

Abstract, BibTeX, Paper (PDF)

This paper presents the results of the interdisciplinary project Colonia3D - Visualization of Roman Cologne. The digital 3D reconstruction of ancient Cologne is the outcome of a joint research project of the Archaeological Institute of the University of Cologne, the Köln International School of Design (KISD) of the Fachhochschule Köln, the Hasso Plattner Institute at the University of Potsdam, and the Römisch-Germanisches Museum (RGM) Cologne. The paper presents the essential concepts of this interactive, museum-oriented 3D information system, describes its different presentation modes, and explains their technical realization. It discusses procedures and interaction concepts that support users while exploring and moving through the virtual 3D city model. Furthermore, techniques for the exchange, preparation, and optimization of complex 3D data sets are described, and potentials for digital museums and exhibitions are outlined. In particular, the presented approach constitutes an IT solution for simplified, spatially context-integrated, informal access to specialized archaeological information.

@inproceedings{TSD11,
  author = { Trapp, Matthias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Colonia3D },
  booktitle = { Tagungsband der 9. Konferenz Kultur und Informatik - Multimediale Systeme },
  year = { 2011 },
  pages = { 201--212 },
  month = { 5 },
  publisher = { Werner H{\"u}lsbusch Verlag }
}

Ansätze zur kartographischen Gestaltung von 3D-Stadtmodellen

Amir Semmo, Matthias Trapp, and Jürgen Döllner
Proceedings 31. Wissenschaftlich-Technische Jahrestagung der DGPF 2011

Abstract, BibTeX, Paper (PDF)

Interactive virtual 3D city models have become an established medium for the effective and efficient communication of geoinformation. They represent a specialized form of geovirtual environments and are characterized by an underlying 3D terrain model, the 3D buildings situated within it, and the complementary street, green, and natural spaces. 3D city model systems allow users to navigate the model interactively and provide the basic functionality for exploring, analyzing, presenting, and editing the spatial information. Particularly in the area of map-like and map-related 3D depictions, automatic methods and techniques for the stylization and abstraction of the objects of a 3D city model constitute a central problem of interactive 3D image synthesis. Here, the abstraction and illustration of potentially important information, and thus the reduction of the user's cognitive load, play a key role. In this regard, methods and techniques for non-photorealistic image synthesis are a proven means of computer graphics; their direct application to a complete 3D city model, however, often yields monotonous results that are severely limited in terms of design and cartography. An efficient and context-sensitive communication of 3D geoinformation, however, requires the coupling of object semantics and abstraction methods. This work presents a concept and its implementation that allows the selection and parameterization of non-photorealistic rendering techniques on the basis of object semantics (Figure 1). This enables the assignment of different automatic abstraction techniques to objects and object groups. The presented approach is real-time capable and allows an interactive classification of objects and features at runtime, which makes it possible, among other things, to visualize scenarios for the interactive exploration of thematically stylized features and feature-related data. This approach opens up possibilities for a targeted and systematic cartographic design of 3D city models as well as their real-time implementation by means of corresponding 3D visualization services.

@INPROCEEDINGS{STD11,
  author = {Semmo, Amir and Trapp, Matthias and J{\"u}rgen D{\"o}llner},
  title = {Ans{\"a}tze zur kartographischen Gestaltung von 3D-Stadtmodellen},
  booktitle = {31. Wissenschaftlich-Technische Jahrestagung der DGPF},
  year = {2011},
  pages = {473--482}
}
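
For illustration, the following sketch shows how the coupling of object semantics and abstraction techniques described above could be expressed in code. It is a minimal, hypothetical example: the semantic classes, technique names, and parameter values are invented for illustration and are not part of the paper's implementation.

# Hypothetical sketch: per-semantics selection of NPR techniques and parameters.
# Semantic classes, technique identifiers, and parameter values are illustrative only.

from dataclasses import dataclass

@dataclass
class StylePreset:
    technique: str   # e.g., toon shading, edge enhancement, color abstraction
    params: dict     # technique-specific parameters

# Mapping from object semantics to an abstraction technique and its parameterization.
SEMANTIC_STYLES = {
    "building":   StylePreset("toon_shading",      {"quantization_levels": 4, "edge_width": 1.5}),
    "vegetation": StylePreset("color_abstraction", {"smoothing_sigma": 2.0}),
    "water":      StylePreset("stylized_shading",  {"wave_amplitude": 0.3}),
    "street":     StylePreset("edge_enhancement",  {"edge_threshold": 0.2}),
}

def resolve_style(semantic_class: str) -> StylePreset:
    """Select the stylization preset for an object based on its semantics;
    unknown classes fall back to a neutral default."""
    return SEMANTIC_STYLES.get(semantic_class, StylePreset("photorealistic", {}))

# A renderer would query the preset per object (or per object group)
# and bind the corresponding shader and uniforms before drawing.
print(resolve_style("building"))
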
2010

Anisotropic Kuwahara Filtering with Polynomial Weighting Functions

Jan Eric Kyprianidis, Amir Semmo, Henry Kang, and Jürgen Döllner
NPAR Poster Session / Proceedings EG UK Theory and Practice of Computer Graphics (TP.CG) 2010

Abstract, BibTeX, Paper (PDF)

In this work we present new weighting functions for the anisotropic Kuwahara filter. The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features. For the smoothing process, the anisotropic Kuwahara filter uses weighting functions that are defined in terms of convolution. For an efficient implementation, these weighting functions are therefore usually sampled into a texture map. By contrast, our new weighting functions do not require convolution and can be computed efficiently during filtering in real time. We show that our approach creates output of similar quality to the original anisotropic Kuwahara filter and present an evaluation scheme to compute the new weighting functions efficiently by using rotational symmetries.

@inproceedings{KSKD10b,
  author = { Kyprianidis, Jan Eric and Semmo, Amir and Kang, Henry and D{\"o}llner, J{\"u}rgen },
  title = { Anisotropic Kuwahara Filtering with Polynomial Weighting Functions },
  booktitle = { Proc. EG UK Theory and Practice of Computer Graphics },
  year = { 2010 },
  pages = { 25--30 },
  month = { 9 }
}

@misc{KSKD10a,
  author = { Kyprianidis, Jan Eric and Semmo, Amir and Kang, Henry and D{\"o}llner, J{\"u}rgen },
  title = { Anisotropic Kuwahara Filtering with Polynomial Weighting Functions },
  howpublished = { NPAR Poster Session },
  month = { 6 },
  year = { 2010 }
}
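
As an illustration of the idea of convolution-free weighting functions, the following NumPy sketch evaluates polynomial sector weights of the form max(0, (v·d) + zeta - eta (v·d_perp)^2)^2, attenuated by a Gaussian. The parameter values and the omission of the anisotropy mapping are simplifications for illustration; this is not the paper's reference implementation.

# Hedged sketch of polynomial sector weights for a Kuwahara-type filter.
# zeta, eta, and sigma are illustrative placeholders, and the local
# orientation/anisotropy transform of the anisotropic filter is omitted.

import numpy as np

def sector_weights(radius=5, n_sectors=8, zeta=0.4, eta=3.0, sigma=None):
    """Return an array of shape (n_sectors, 2r+1, 2r+1) with per-sector weights."""
    if sigma is None:
        sigma = 0.5 * radius
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    v = np.stack([xs, ys], axis=-1) / float(radius)      # normalized kernel offsets
    gauss = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))

    polys = []
    for k in range(n_sectors):
        phi = 2.0 * np.pi * k / n_sectors
        d = np.array([np.cos(phi), np.sin(phi)])          # sector direction
        d_perp = np.array([-d[1], d[0]])
        a = v @ d                                         # coordinate along the sector
        b = v @ d_perp                                    # coordinate across the sector
        polys.append(np.maximum(0.0, a + zeta - eta * b ** 2) ** 2)

    polys = np.array(polys)
    polys /= np.maximum(polys.sum(axis=0), 1e-8)          # sectors form an approximate partition of unity
    return polys * gauss                                  # attenuate each sector by the Gaussian

w = sector_weights()
print(w.shape)  # -> (8, 11, 11)
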

Automated Image-Based Abstraction of Aerial Images

Amir Semmo, Jan Eric Kyprianidis, and Jürgen Döllner
Proceedings of 13th AGILE International Conference on Geographic Information Science 2010

Abstract, BibTeX, DOI, Paper (PDF)

Aerial images represent a fundamental type of geodata with a broad range of applications in GIS and geovisualization. The perception and cognitive processing of aerial images by humans, however, is still subject to the specific limitations of photorealistic depictions, such as low-contrast areas, unsharp object borders, and visual noise. In this paper we present a novel technique to automatically abstract aerial images that enhances visual clarity and generalizes the contents of aerial images to improve their perception and recognition. The technique applies non-photorealistic image processing by smoothing local image regions with low contrast and emphasizing edges in image regions with high contrast. To handle the abstraction of large images, we introduce an image tiling procedure that is optimized for post-processing images on GPUs and avoids visible artifacts across junctions. This is technically achieved by filtering additional connection tiles that overlap the main tiles of the input image. The technique also allows the generation of different levels of abstraction for aerial images by computing a mipmap pyramid, where each of the mipmap levels is filtered with adapted abstraction parameters. These mipmaps can then be used to perform level-of-detail rendering of abstracted aerial images. Finally, the paper contributes a study on aerial image abstraction by analyzing the results of the abstraction process on distinctive visible elements in common aerial image types. In particular, we have identified a high abstraction potential in landscape images and a higher benefit from edge enhancement in urban environments.

@incollection{SKD10,
  author = { Semmo, Amir and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Automated Image-Based Abstraction of Aerial Images },
  booktitle = { Geospatial Thinking },
  publisher = { Springer },
  year = { 2010 },
  editor = { Painho, Marco and Santos, Maribel Yasmina and Pundt, Hardy },
  series = { Lecture Notes in Geoinformation and Cartography },
  pages = { 359--378 },
  month = { 5 },
  doi = { 10.1007/978-3-642-12326-9_19 }
}
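
The tiling idea can be sketched as follows. The paper filters additional connection tiles that overlap the main tiles; the simplified sketch below illustrates the closely related approach of filtering each main tile with an overlapping margin and cropping the result back, which avoids seam artifacts for filters with a bounded neighborhood. Tile size, overlap, and the stand-in filter are illustrative only.

# Illustrative sketch of tile-based filtering with overlap to avoid seam artifacts.
# This is a simplified variant of the idea (pad each tile by the filter's support,
# filter, then crop); it is not the paper's exact connection-tile procedure.

import numpy as np

def filter_tiled(image, filter_fn, tile=512, overlap=16):
    """Apply filter_fn (which must only need pixels within `overlap` of each
    output pixel) to a large image, tile by tile."""
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            # Expand the tile by the overlap, clamped to the image borders.
            ey0, ex0 = max(y0 - overlap, 0), max(x0 - overlap, 0)
            ey1, ex1 = min(y1 + overlap, h), min(x1 + overlap, w)
            filtered = filter_fn(image[ey0:ey1, ex0:ex1])
            # Crop the filtered result back to the main tile.
            out[y0:y1, x0:x1] = filtered[y0 - ey0:y0 - ey0 + (y1 - y0),
                                         x0 - ex0:x0 - ex0 + (x1 - x0)]
    return out

# Example with a trivial vertical box blur standing in for the abstraction filter:
img = np.random.rand(1200, 1500).astype(np.float32)
blur = lambda t: (t + np.roll(t, 1, 0) + np.roll(t, -1, 0)) / 3.0
print(filter_tiled(img, blur).shape)
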

ContextLua: Dynamic Behavioral Variations in Computer Games

Benjamin Hosain Wasty, Amir Semmo, Malte Appeltauer, Bastian Steinert, and Robert Hirschfeld
Proceedings of 2nd International Workshop on Context-Oriented Programming 2010

Abstract, BibTeX, Paper (ACM DL)

Behavioral variations are central to modern computer games, as they make the gameplay a more interesting user experience. However, these variations significantly add to the implementation complexity. We discuss the domain of computer games with respect to dynamic behavioral variations and argue that context-oriented programming is of special interest for this domain. This motivates our extension to the dynamic scripting language Lua, which is frequently used in the development of computer games. Our newly provided programming constructs allow game developers to use layers for defining and activating variations of the basic gameplay.

@inproceedings{WSASH10,
  author = { Hosain Wasty, Benjamin and Semmo, Amir and Appeltauer, Malte and Steinert, Bastian and Hirschfeld, Robert },
  title = { ContextLua: Dynamic Behavioral Variations in Computer Games },
  booktitle = { Proceedings of the 2nd International Workshop on Context-Oriented Programming },
  year = { 2010 },
  pages = { 5:1--5:6 },
  doi = { 10.1145/1930021.1930026 }
}
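
ContextLua adds layer constructs to Lua; the following Python sketch only illustrates the underlying context-oriented idea of grouping behavioral variations into layers that can be activated for a dynamic extent. The class, layer name, and behavior are invented and do not reflect ContextLua's actual syntax or API.

# Illustrative Python sketch of the context-oriented programming idea behind
# ContextLua: behavioral variations are grouped into layers that can be
# activated dynamically. Names and behavior are invented for illustration.

from contextlib import contextmanager

class Enemy:
    def attack_damage(self):
        return 10  # base gameplay behavior

# A "layer" bundles method variations that refine the base behavior.
_layers = {
    "hard_mode": {
        (Enemy, "attack_damage"): lambda self: 25,
    },
}

@contextmanager
def with_layer(name):
    """Activate a layer for the dynamic extent of a with-block."""
    saved = []
    for (cls, attr), variation in _layers[name].items():
        saved.append((cls, attr, getattr(cls, attr)))
        setattr(cls, attr, variation)
    try:
        yield
    finally:
        for cls, attr, original in saved:
            setattr(cls, attr, original)  # restore the base behavior

e = Enemy()
print(e.attack_damage())          # 10 -> base behavior
with with_layer("hard_mode"):
    print(e.attack_damage())      # 25 -> layered variation active
print(e.attack_damage())          # 10 -> variation deactivated again
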

August 2017

Pictory: Combining Neural Style Transfer and Image Filtering

ACM SIGGRAPH (Appy Hour), Los Angeles, CA, USA

July 2017

Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering?

Expressive 2017, Los Angeles, CA, USA

December 2016

Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices

ACM SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications, Macao

December 2016

BeCasso: Artistic Image Processing and Editing on Mobile Devices

ACM SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo Session), Macao

July 2016

Interactive Multi-scale Oil Paint Filtering on Mobile Devices

ACM SIGGRAPH (Poster Session), Anaheim, CA, USA

July 2016

BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices

ACM SIGGRAPH (Appy Hour), Anaheim, CA, USA

May 2016

Interactive Oil Paint Filtering On Mobile Devices

Expressive 2016 (Poster Session), Lisbon, Portugal

November 2015

Konzepte und Techniken für das kartographische Design von 3D geovirtuellen Umgebungen

SGK-Herbsttagung, Muttenz, Switzerland

August 2015

Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques

International Cartographic Conference 2015, Rio de Janeiro, Brazil

June 2015

Image Stylization by Oil Paint Filtering using Color Palettes

Expressive 2015, Istanbul, Turkey

November 2014

An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments

ACM SIGSPATIAL MapInteract 2014, Dallas, TX, USA

August 2014

Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes

Expressive 2014, Vancouver, Canada

September 2013

Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments

International Workshop on Vision, Modeling and Visualization (VMV) 2013, Lugano, Switzerland

July 2013

Real-Time Rendering of Water Surfaces with Cartography-Oriented Design

Expressive 2013, Anaheim, CA, USA

June 2012

Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions

Eurographics Conference on Visualization (EuroVis) 2012, Vienna, Austria

June 2012

Cartography-Oriented Visualization of Virtual 3D City Models based on Level-of-Abstraction Transitions

Hasso Plattner Institute / Research School, Potsdam, Germany

May 2012

Colonia3D

Kultur und Informatik Conference, Berlin, Germany

April 2011

Ansätze zur kartographischen Gestaltung von 3D-Stadtmodellen

31. Wissenschaftlich-Technische Jahrestagung der DGPF, Mainz, Germany

May 2010

Automated Image-Based Abstraction of Aerial Images

AGILE International Conference on Geographic Information Science, Guimarães, Portugal

2014/15 : Winter Term / Summer Term
  • Geovisualization: seminar (MA), tutor
  • Introduction to Visualization: lecture (MA), tutor
2013/14 : Winter Term / Summer Term
  • Geovisualization: seminar (MA), tutor
  • Image & Video Processing: seminar (BA), tutor
  • Video Analysis, Abstraction, and Summarization: project (BA), tutor
2012/13 : Winter Term / Summer Term
  • Computergraphics II: lecture (BA), tutor & co-lecturer
  • Graphics Programming with OpenGL and C++: lecture/seminar (BA), tutor & lecturer
2011/12 : Winter Term / Summer Term
  • Real-Time Rendering Techniques: seminar (MA), tutor
  • Computergraphics I: lecture (BA), tutor
  • Computergraphics II: lecture (BA), tutor
Program Committee
Expressive 2017, GeoVIS 2015 (ISPRS Geospatial Week)

Reviewing
ACM SIGGRAPH (2014), IEEE Transactions on Visualization and Computer Graphics (2015), Computers & Graphics (2016), Eurographics (2017), IEEE VIS / SciVis (2017), Eurographics Conference on Visualization / EuroVis (2014), IEEE Pacific Visualization (2013), IEEE MultiMedia (2016), International Journal of Geographical Information Science (2013), The Visual Computer (2012), Graphics Interface (2017), Vision, Modeling and Visualization (2016)

Memberships
ACM, ACM SIGGRAPH, IEEE

Awards & Honors
Best Demo Award @ SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2016, CGF Cover Contest 2016 (Runner-up), Best Paper Award @ Expressive 2015, Cover Image Selection for Proceedings of EG UK TP.CG 2013, Cover Image Selection for International Journal of Heritage in the Digital Era (vol. 1, no. 1) 2012, Hasso Plattner Institute Best Diploma (M.Sc.) 2011, Best Paper Award @ EuroMed Conference 2010, Best Technical Research Student Paper @ EG UK TP.CG 2010

Office

amir.semmo@hpi.de

(+49) 0331 5509 3909

(+49) 0331 5509 172

Hasso-Plattner-Institut
Prof.-Dr.-Helmert-Strasse 2-3
14482 Potsdam, Germany

Room H-2.26

asemmo (GitHub, Bitbucket, ...)
