- Image Browsing VR Interface
Supporting Spatial Memory for Image Browsing in VR
Image browsing in virtual environments is becoming a common task. As on conventional 2D displays, current VR image browsing interfaces use scrolling to navigate larger collections of images. This form of interaction, however, leads to long exploration times and requires extra effort both to retrieve and to recall targets. In this project, we focus on how browsing of large image collections can be significantly improved by leveraging affordances available in VR.
- Egocentric Distance Perception
Humans find it difficult to perceive and estimate far distances (over 30 m) in the real world, and egocentric distance is underestimated even further in virtual environments. In this project, the focus is on better understanding how additional visual cues can help users perceive and estimate egocentric distance, so that they can achieve better interaction performance in virtual environments.
- Pseudo-Weight Perception
Haptic illusion (To Be Updated)
- Auditory Feedback
Use of Sound to Provide Occluded Visual Information in Touch Gestural Interfaces
Direct touch gestures are becoming a popular input modality for mobile and tabletop interaction. However, finger touch input is considered less accurate than pen-based input. One of the main reasons is that the visual feedback at the touch point is occluded by the fingertip itself, which makes it difficult to perceive and correct errors. We propose using another modality to provide information about the occluded area: spatial information from the visual channel is transformed into temporal and frequency information on a second channel. We use sound to illustrate this cross-modal transformation. Results show that, for drawing tasks where visual information is important, performance with the additional modality is better than with visual feedback alone.
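The offset-to-sound transformation described above can be illustrated with a minimal sketch. This is our own illustration, not the project's implementation: it assumes a simple linear mapping of the occluded touch offset to pitch and stereo pan, and the function and parameter names are hypothetical.

```python
def offset_to_sound(dx, dy, max_offset=10.0,
                    base_freq=440.0, freq_range=220.0):
    """Map the occluded offset between the intended and the actual
    touch point (dx, dy, in millimeters) to auditory parameters.

    Vertical error controls pitch: touching above the target raises
    the frequency, touching below lowers it.  Horizontal error
    controls stereo pan in [-1, 1].  (Hypothetical mapping for
    illustration only.)
    """
    clamp = lambda v: max(-1.0, min(1.0, v / max_offset))
    freq = base_freq + freq_range * clamp(dy)  # pitch encodes dy
    pan = clamp(dx)                            # pan encodes dx
    return freq, pan
```

A zero offset maps to the neutral tone (`base_freq`, centered), so the user hears deviation from the intended touch point directly as a change in pitch and panning.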
- Sketch Based Interface for Contents Generation
We present a new feature for interactive modeling of 3D objects with a sketch-based interface. Most objects are composed of multiple components. A single component is generated from three strokes: two open silhouettes and a closed cross section. We define three variables to represent the three strokes, from which the surface is reconstructed. The user can create relatively complex objects consisting of multiple components in an iterative way.
We present a sketch-based interface for editing 3D objects. The modeling method allows users to draw a 3D shape with only three strokes: the left and right silhouettes and a freeform cross section. The generated object can then be edited by interpreting the user's subsequent strokes. A single stroke is often not enough to capture the user's intention; the key idea is to edit the shape progressively with multiple strokes. We define several stroke vocabularies for deforming the surface, for example global or local volume fattening/thinning, which lengthens or shortens the radius of the cross section. With the proposed interface, no expert knowledge of 3D modeling is required, and users can intuitively edit 3D models that express their design concepts and intentions. The system can be used by novices for 3D sketching in the conceptual design stage.
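The local fattening/thinning operation mentioned above can be sketched as follows. This is a minimal illustration under our own assumptions (a Gaussian falloff around the edited section; names are hypothetical), not the authors' implementation.

```python
import math

def fatten_cross_sections(radii, center_idx, amount, sigma=2.0):
    """Scale cross-section radii with a Gaussian falloff around center_idx.

    radii: per-section radii along the object's spine.
    amount: > 0 fattens, < 0 thins; 0 leaves the shape unchanged.
    sigma: controls how local the edit is (larger = more global).
    """
    out = []
    for i, r in enumerate(radii):
        # weight is 1 at the edited section and decays with distance
        w = math.exp(-((i - center_idx) ** 2) / (2.0 * sigma ** 2))
        out.append(r * (1.0 + amount * w))
    return out
```

A stroke drawn by the user would supply `center_idx` (where the stroke touches the spine) and `amount` (how far the stroke deviates from the current silhouette); a very large `sigma` approximates the global variant of the edit.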
We present a sketch-based interface for an animation design system that allows the user to draw 2D sketches representing 3D objects and to animate the generated objects by drawing predefined strokes, each of which maps to a corresponding animation. We employ linear and cubic interpolation to create the animation. Users only need to draw the 2D sketches and the motion path, and the 3D animation is then created automatically. Furthermore, the velocity of the movement can be controlled by the speed of the stroke drawing. The prototype can be applied in education, to express ideas intuitively through animation.
A key requirement of character animation editing is not only as-rigid-as-possible shape deformation but also real-time operation. During shape manipulation, we define a squared-difference metric over vertex neighborhoods and face neighborhoods on the mesh and separate the coordinates of free and constrained vertices. The algorithm can then solve the x and y coordinates independently in the general fitting step. In the implementation, by designing an appropriate chain structure for the coefficient matrix and constant vector, redundant computation is reduced through indexed storage of the sparse matrix and the conjugate gradient method. Experiments show that on an ordinary personal computer, meshes of about 1,000 vertices can be deformed in real-time interaction.
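The structure of such a solve, with a sparse quadratic energy, soft positional constraints, per-coordinate independence, and conjugate gradient, can be sketched as below. This is a simplified 1D-chain stand-in for the paper's mesh energy, under our own assumptions (a plain chain Laplacian instead of the vertex/face neighborhood metric; SciPy's `cg` in place of a custom solver).

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import cg

def solve_deformation(n, constraints, weight=10.0):
    """Minimize squared differences between neighboring vertices on a
    chain of n vertices, with soft positional constraints, solving the
    x and y coordinates independently via conjugate gradient.

    constraints: {vertex_index: (x, y)} target positions.
    Returns an (n, 2) array of deformed positions.
    """
    # Normal equations A = L^T L + w * C^T C for a chain Laplacian L.
    A = lil_matrix((n, n))
    for i in range(n - 1):
        # edge (i, i+1) contributes (p[i+1] - p[i])^2 to the energy
        A[i, i] += 1.0
        A[i + 1, i + 1] += 1.0
        A[i, i + 1] -= 1.0
        A[i + 1, i] -= 1.0
    b = np.zeros((n, 2))
    for idx, pos in constraints.items():
        A[idx, idx] += weight
        b[idx] += weight * np.asarray(pos, dtype=float)
    A = csr_matrix(A)  # indexed sparse storage for fast mat-vec products
    out = np.empty((n, 2))
    for axis in range(2):  # x and y are solved independently
        x, info = cg(A, b[:, axis])
        assert info == 0, "CG did not converge"
        out[:, axis] = x
    return out
```

Because the system matrix depends only on mesh connectivity and the constraint set, it can be assembled once and reused across interactive frames, which is what makes real-time manipulation of meshes of this size feasible.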