A full list of publications is available on the Publications page.

Featured Research Topics

Past Work at Osaka University:

Past Work at NAIST:

Plant Structure Modeling

We propose a method for inferring three-dimensional (3D) plant branch structures that are hidden under leaves from multi-view observations. Unlike previous geometric approaches that heavily rely on the visibility of the branches or use parametric branching models, our method makes statistical inferences of branch structures in a probabilistic framework. By inferring the probability of branch existence using a Bayesian extension of image-to-image translation applied to each of multi-view images, our method generates a probabilistic plant 3D model, which represents the 3D branching pattern that cannot be directly observed. Experiments demonstrate the usefulness of the proposed approach in generating convincing branch structures in comparison to prior approaches.
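
The snippet below is a minimal, illustrative sketch (in Python, with hypothetical function and parameter names) of how per-view branch-probability maps, such as those produced by the Bayesian image-to-image translation, could be fused into a voxel grid of branch-existence probabilities. The camera projection callables and the independent-view log-odds fusion are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_branch_probabilities(prob_maps, project_fns, grid_shape=(64, 64, 64)):
    """Fuse per-view branch-probability maps into a voxel probability grid.

    prob_maps   : list of HxW arrays in [0, 1], one per view (e.g., outputs
                  of a Bayesian image-to-image translator).
    project_fns : list of callables mapping (N, 3) voxel centers to (N, 2)
                  pixel coordinates for the corresponding view (assumed known).
    """
    # Voxel centers in a unit cube (placeholder world coordinates).
    zs, ys, xs = np.meshgrid(*[np.linspace(0.0, 1.0, s) for s in grid_shape],
                             indexing="ij")
    centers = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

    # Accumulate log-odds, assuming the views are independent.
    log_odds = np.zeros(len(centers))
    for prob, project in zip(prob_maps, project_fns):
        uv = np.round(project(centers)).astype(int)
        uv = np.clip(uv, 0, np.array(prob.shape[::-1]) - 1)
        p = np.clip(prob[uv[:, 1], uv[:, 0]], 1e-6, 1.0 - 1e-6)
        log_odds += np.log(p / (1.0 - p))

    # Convert back to per-voxel probabilities of branch existence.
    return (1.0 / (1.0 + np.exp(-log_odds))).reshape(grid_shape)
```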

  1. Fumio Okura:
    "3D modeling and reconstruction of plants and trees: A cross-cutting review across computer graphics, vision, and plant phenotyping"
    Breeding Science, Vol. 72, Issue 1, pp. 31-47, Feb 2022. (Invited review)
    (Open access paper)
  2. Takuma Doi, Fumio Okura, Toshiki Nagahara, Yasuyuki Matsushita, Yasushi Yagi:
    "Descriptor-free multi-view region matching for instance-wise 3D reconstruction"
    Proc. Asian Conf. on Computer Vision (ACCV'20), (oral, acceptance rate: 8%), Dec 2020.
    (CVF open access) (arXiv)
  3. *Takahiro Isokane, *Fumio Okura, Ayaka Ide, Yasuyuki Matsushita, Yasushi Yagi:
    "Probabilistic plant modeling via multi-view image-to-image translation"
    Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'18), pp. 2906-2915, Jun 2018.
    (Project page)

CV Applications for Plants

  1. Satoru Tsugawa, Kaname Teratsuji, Fumio Okura, Koji Noshita, Masaki Tateno, Jingyao Zhang, Taku Demura:
    "Exploring the mechanical and morphological rationality of tree branch structure based on 3D point cloud analysis and the finite element method"
    Scientific Reports, Vol. 12, Article No. 4054, Mar 2022. (2020 Impact Factor: 4.380)
    (Open access paper)
  2. Yosuke Toda, Fumio Okura, Jun Ito, Satoshi Okada, Toshinori Kinoshita, Hiroyuki Tsuji, Daisuke Saisho:
    "Training instance segmentation neural network with synthetic datasets for crop seed phenotyping"
    Communications Biology, Vol. 3, Article 173, Apr 2020. (2020 Impact Factor: 6.268)
    (Open access paper)
  3. Yosuke Toda, Fumio Okura:
    "How convolutional neural networks diagnose plant disease"
    Plant Phenomics (a Science Partner Journal), Article ID 9237136, 14 pages, Mar 2019. (Top visited article of 2019)
    (Open access paper)

Dual Task Gait

Dual-task performance, i.e., performing two tasks simultaneously, is a useful measure of a person's cognitive abilities because it places a heavier load on the brain than a single task. Large-scale datasets of dual-task behavior are required to quantitatively analyze the relationships among dual-task performance, cognitive functions, and personal attributes such as age. We developed an automatic data collection system for dual-task behavior that can be installed in public spaces or facilities. The system is designed as an entertainment kiosk to attract participants. We used the system to collect a large-scale dataset consisting of more than 70,000 sessions of dual-task behavior, in conjunction with a long-running exhibition at a science museum. The resulting dataset, which includes sensor data such as RGB-D image sequences, can be used for learning- and vision-based investigations of human cognitive functions.

  1. Shuqiong Wu, Taku Matsuura, Fumio Okura, Yasushi Makihara, Chengju Zhou, Kota Aoki, Ikuhisa Mitsugami, Yasushi Yagi:
    "Detecting lower MMSE scores in older adults using cross-trial features from a dual-task with gait and arithmetic"
    IEEE Access, Vol. 9, pp. 150268-150282, Nov 2021. (2020 Impact Factor: 3.367)
    (Open access paper)
  2. Taku Matsuura, Kazuhiro Sakashita, Andrey Grushnikov, Fumio Okura, Ikuhisa Mitsugami, Yasushi Yagi:
    "Statistical analysis of dual-task gait characteristics for cognitive score estimation"
    Scientific Reports, Vol. 9, Article 19927, Dec 2019. (2018 Impact Factor: 4.011)
    (Open access paper)
  3. Kota Aoki, Trung Thanh Ngo, Ikuhisa Mitsugami, Fumio Okura, Masataka Niwa, Yasushi Makihara, Yasushi Yagi, Hiroaki Kazui:
    "Early detection of lower MMSE scores in elderly based on dual-task gait"
    IEEE Access, Vol. 7, pp. 40085-40094, Mar 2019. (2018 Impact Factor: 4.098)
    (Open access paper)
  4. Chengju Zhou, Ikuhisa Mitsugami, Fumio Okura, Kota Aoki, Yasushi Yagi:
    "Growth assessment of school-age children from dual-task observation"
    ITE Trans. on Media Technology and Applications, Vol. 6, No. 4, pp. 286-296, Oct 2018.
    (Open access paper)
  5. Fumio Okura, Ikuhisa Mitsugami, Masataka Niwa, Kota Aoki, Chengju Zhou, Yasushi Yagi:
    "Automatic collection of dual-task human behavior for analysis of cognitive function"
    ITE Trans. on Media Technology and Applications, Vol. 6, No. 2, pp. 138-150, Apr 2018.
    (Open access paper)

Cow Gait Analysis

Advances in computer vision enable the automatic assessment of dairy cow health, for instance, the detection of lameness. To monitor the health condition of each cow, individual cows must be identified automatically. Microchip tags attached to the cow's body have been employed for automatic identification; however, tagging requires substantial effort from dairy farmers and stresses the cows because of the body-mounted devices. We propose a method for cow identification based on three-dimensional video analysis using RGB-D cameras, which capture RGB color information as well as the subject's distance from the camera. Cameras are mostly maintenance-free, do not contact the cow's body, and are highly compatible with existing vision-based health monitoring systems. Using RGB-D videos of walking cows, we develop a unified approach that identifies individuals from two complementary cues: gait (i.e., walking style) and texture (i.e., markings).
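
As a rough, hypothetical illustration of how the two complementary cues could be combined, the sketch below fuses gait and texture dissimilarities at the score level; the Euclidean distances and the fixed weight are assumptions for illustration, not the actual matching pipeline.

```python
import numpy as np

def identify_cow(probe_gait, probe_texture, gallery, w_gait=0.5):
    """Identify a cow by fusing gait and texture dissimilarity scores.

    probe_gait, probe_texture : feature vectors extracted from the walking cow
    gallery : dict mapping cow_id -> (gait_feature, texture_feature)
    w_gait  : weight balancing the two complementary cues
    """
    def dissimilarity(a, b):
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    scores = {}
    for cow_id, (gait_feat, tex_feat) in gallery.items():
        s_gait = dissimilarity(probe_gait, gait_feat)
        s_tex = dissimilarity(probe_texture, tex_feat)
        scores[cow_id] = w_gait * s_gait + (1.0 - w_gait) * s_tex

    # The identity with the lowest fused dissimilarity wins.
    return min(scores, key=scores.get)
```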

  1. Fumio Okura, Saya Ikuma, Yasushi Makihara, Daigo Muramatsu, Ken Nakada, Yasushi Yagi:
    "RGB-D video-based individual identification of dairy cows using gait and texture analyses"
    Computers and Electronics in Agriculture, Vol. 165, Article 104944, Oct 2019. (2018 Impact Factor: 3.171)
    (Preprint:pdf)
    (The final publication is available at https://doi.org/10.1016/j.compag.2019.104944)

Human 3D Modeling

Estimation of the naked human shape is essential in several applications such as virtual try-on. We propose an approach that estimates naked human 3D pose and shape, including non-skeletal shape information such as musculature and fat distribution, from a single RGB image. The proposed approach optimizes a parametric 3D human model using person silhouettes labeled with a clothing category, together with statistical models of the displacement between clothed and naked body shapes for each clothing category. Experiments demonstrate that our approach estimates human shape more accurately than a prior method.
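
The snippet below is a minimal sketch of this kind of fitting, assuming a silhouette renderer for the parametric body model is available (render_silhouette is a hypothetical callable) and using the clothing-category displacement statistics as a Mahalanobis prior; the energy weighting and optimizer choice are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def fit_naked_shape(observed_silhouette, render_silhouette,
                    displacement_mean, displacement_cov_inv,
                    init_params, prior_weight=1e-3):
    """Fit pose/shape parameters of a parametric body model to a silhouette.

    observed_silhouette : HxW binary mask of the clothed person
    render_silhouette   : callable(params) -> HxW silhouette of the model
    displacement_mean, displacement_cov_inv : statistics of the clothed-to-
        naked displacement for the person's clothing category
    """
    def energy(params):
        rendered = render_silhouette(params)
        data_term = np.mean((rendered - observed_silhouette) ** 2)
        d = params - displacement_mean
        prior_term = d @ displacement_cov_inv @ d  # Mahalanobis prior
        return data_term + prior_weight * prior_term

    result = minimize(energy, init_params, method="Nelder-Mead")
    return result.x
```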

  1. *Yui Shigeki, *Fumio Okura, Ikuhisa Mitsugami, Yasushi Yagi:
    "Estimating 3D human shape under clothing from a single RGB image"
    IPSJ Trans. on Computer Vision and Applications, Vol. 10, No. 16, pp. 1-6, Dec 2018.
    (Open access paper)

Unifying Color and Texture Transfer for Season Transfer

Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modelled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season - e.g., leaves on bare trees or piles of snow on a street - and flooding.
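
A minimal sketch of the analysis idea is shown below, assuming the source has already been color-transferred toward the exemplar by some off-the-shelf method: where a smoothed residual between the transferred source and the exemplar remains large, color transfer alone cannot explain the change, so new content is requested from texture transfer. The function name, window, and threshold are illustrative, not the paper's actual criterion.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_transfer_mask(exemplar, color_transferred, threshold=0.15, window=7):
    """Mark regions where color transfer alone fails to reproduce the exemplar.

    exemplar, color_transferred : aligned HxWx3 float images in [0, 1];
        color_transferred is the source after color transfer toward the exemplar.
    Returns a boolean mask of pixels where texture synthesis is needed.
    """
    # Per-pixel color discrepancy remaining after the transfer.
    error = np.linalg.norm(color_transferred - exemplar, axis=-1)

    # Smooth the error so the mask follows regions rather than single pixels.
    smoothed = uniform_filter(error, size=window)
    return smoothed > threshold
```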

  1. Fumio Okura, Kenneth Vanhoey, Adrien Bousseau, Alexei A. Efros, George Drettakis:
    "Unifying color and texture transfer for predictive appearance manipulation"
    Computer Graphics Forum (Proc. Eurographics Symposium on Rendering), Vol. 34, Issue 4, pp. 53-63, Jun 2015.
    (Low resolution preprint:pdf, 7MB) (Full resolution preprint:pdf, 96MB)
    (Additional results)

Inconsistency Issues in Indirect Augmented Reality

Indirect augmented reality (IAR) employs a unique approach to achieve high-quality synthesis of the real world and the virtual world, unlike traditional augmented reality (AR), which superimposes virtual objects in real time. IAR uses pre-captured omnidirectional images and offline superimposition of virtual objects for achieving jitter- and drift-free geometric registration as well as high-quality photometric registration. However, one drawback of IAR is the inconsistency between the real world and the pre-captured image. In this paper, we present a new classification of IAR inconsistencies and analyze the effect of these inconsistencies on the IAR experience. Accordingly, we propose a novel IAR system that reflects real-world illumination changes by selecting an appropriate image from among multiple pre-captured images obtained under various illumination conditions. The results of experiments conducted at an actual historical site show that the consideration of real-world illumination changes improves the realism of the IAR experience.
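
As a toy illustration of the selection step, the sketch below assumes illumination is summarized by a single brightness value measured from a live camera; the real system may rely on richer cues, so the names and criterion here are assumptions.

```python
import numpy as np

def select_precaptured_image(live_brightness, candidates):
    """Pick the pre-captured omnidirectional image that best matches
    the current real-world illumination.

    live_brightness : brightness measured from the live view
                      (e.g., mean luminance of a reference region)
    candidates      : list of (image, brightness) pairs captured offline
                      under various illumination conditions
    """
    diffs = [abs(live_brightness - b) for _, b in candidates]
    best = int(np.argmin(diffs))
    return candidates[best][0]
```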

  1. Fumio Okura, Takayuki Akaguma, Tomokazu Sato, Naokazu Yokoya:
    "Addressing temporal inconsistency in indirect augmented reality"
    Multimedia Tools and Applications, Vol. 76, Issue 2, pp. 2671-2695, Jan 2017. (2014 Impact Factor: 1.346)
    (Low resolution preprint:pdf, 4MB) (Full resolution preprint:pdf, 29MB)

Image-Based Rendering for Mixed Reality World Exploration

This study proposes a framework for photorealistic synthesis of virtual objects and a virtualized real world. We combine offline rendering of virtual objects with image-based rendering (IBR) to obtain the high quality of offline rendering without the computational cost of online CG rendering; i.e., the online stage only incurs the cost of IBR. Our IBR implementation reduces the online computational cost by pre-rendering the virtual objects at structured viewpoints (e.g., at every grid point).
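
The sketch below illustrates why structured viewpoints keep the online cost low, assuming virtual objects have been pre-rendered at every grid point: at run time the system only needs to find the grid viewpoints nearest to the user and blend their images. Function and parameter names are hypothetical.

```python
import numpy as np

def nearest_grid_viewpoints(user_pos, grid_origin, grid_spacing, k=4):
    """Return the pre-rendered grid viewpoints nearest to the user position
    and inverse-distance weights for blending their images."""
    # Continuous grid coordinates of the user position.
    g = (np.asarray(user_pos, dtype=float) - grid_origin) / grid_spacing
    base = np.floor(g).astype(int)

    # The eight grid points surrounding the user position.
    offsets = np.array([[dx, dy, dz] for dx in (0, 1)
                                     for dy in (0, 1)
                                     for dz in (0, 1)])
    corners = base + offsets
    dists = np.linalg.norm(corners - g, axis=1)

    # Keep the k closest viewpoints and normalize their blending weights.
    order = np.argsort(dists)[:k]
    weights = 1.0 / (dists[order] + 1e-6)
    return corners[order], weights / weights.sum()
```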

  1. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Mixed-reality world exploration using image-based rendering"
    ACM Journal on Computing and Cultural Heritage, Vol. 8, Issue 2, Article No. 9, Mar 2015.
    (Preprint:pdf)
  2. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Photorealistic augmented reality image composition based on free-viewpoint image generation from pre-rendered images",
    Technical Report of the Virtual Reality Society of Japan, Vol. 18, No. CS-3, pp. 11-16, Sep 2013. (SIG-MR Award) (in Japanese)

Free-Viewpoint Mobile Robot Teleoperation Interface

This study proposes a teleoperation interface with which an operator can control a robot from freely configured viewpoints using realistic images of the physical world. The viewpoints generated by the proposed interface provide human operators with intuitive control using a head-mounted display and head tracker, and help them grasp the environment surrounding the robot. A state-of-the-art free-viewpoint image generation technique is employed to generate the scene presented to the operator. In addition, an augmented reality technique is used to superimpose a 3D model of the robot onto the generated scenes.

  1. Fumio Okura, Yuko Ueda, Tomokazu Sato, Naokazu Yokoya:
    "Teleoperation of mobile robots by generating augmented free-viewpoint images",
    Proc. 2013 IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS'13), pp. 665-671, Nov 2013. (Paper:pdf)
  2. Yuko Ueda, Fumio Okura, Tomokazu Sato, Naokazu Yokoya:
    "A teleoperation interface for mobile robots using augmented free-viewpoint image generation",
    IEICE Technical Report, MVE2012-73, Jan 2013. (in Japanese)

Full Spherical HDR Imaging

This study proposes a method for acquiring full spherical high dynamic range (HDR) images without any missing areas by using two omnidirectional cameras mounted on the top and bottom of an unmanned airship. The full spherical HDR images are generated by combining multiple omnidirectional images captured with different shutter speeds. The generated images are intended for use in immersive panoramas and their augmentation via image-based lighting.
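
For illustration, the snippet below shows a standard weighted HDR merge of differently exposed images, assuming a linear camera response; the stitching of the top and bottom omnidirectional views used in the actual system is omitted.

```python
import numpy as np

def merge_hdr(images, shutter_speeds):
    """Merge differently exposed images into one HDR radiance map.

    images         : list of HxWx3 float arrays in [0, 1] (linear response assumed)
    shutter_speeds : exposure time of each image, in seconds
    """
    numerator = np.zeros_like(images[0])
    denominator = np.zeros_like(images[0])
    for img, t in zip(images, shutter_speeds):
        # Hat weighting: trust mid-range pixels, distrust near-black/near-white.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += w * (img / t)   # per-image radiance estimate
        denominator += w
    return numerator / np.maximum(denominator, 1e-6)
```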

  1. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Aerial full spherical HDR imaging and display"
    Virtual Reality (Springer), Vol. 18, No. 4, pp. 255-269, Nov 2014.
    (Preprint:pdf) / (Video:youtube) / (Video:wmv,11MB)
  2. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Generation of full spherical HDR videos without invisible areas using two omnidirectional cameras mounted on an unmanned airship"
    Transactions of the Virtual Reality Society of Japan, Vol. 17, No. 3, pp. 139-149, Sep 2012. (Paper:pdf) (in Japanese)

Tone Mapping using Region Segmentation

We propose a tone mapping method targeting HDR images in which bright and dark regions form two spatially separated luminance distributions. We assume that viewers do not feel discomfort even if the relative luminance between bright and dark regions is reversed, as long as the regions are clearly separated in both luminance and spatial distribution. Under this assumption, we divide an HDR image into bright and dark regions and apply a different tone mapping function to each region independently.
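
A minimal sketch of the region-wise idea follows; it assumes a simple luminance threshold (the geometric mean) to split the image and uses plain per-region normalization in the log domain as a stand-in for the actual tone mapping functions.

```python
import numpy as np

def region_wise_tonemap(luminance, threshold=None):
    """Tone-map HDR luminance by treating bright and dark regions separately.

    luminance : HxW array of positive HDR luminance values
    threshold : luminance separating the two regions (default: geometric mean)
    Returns a display-referred image in [0, 1].
    """
    log_lum = np.log(luminance + 1e-6)
    if threshold is None:
        threshold = float(np.exp(log_lum.mean()))  # geometric-mean split

    out = np.zeros(luminance.shape)
    for mask in (luminance <= threshold, luminance > threshold):
        if not mask.any():
            continue
        region = log_lum[mask]
        # Map each region's log-luminance range to [0, 1] independently, so the
        # relative order between bright and dark regions may be reversed.
        out[mask] = (region - region.min()) / (region.max() - region.min() + 1e-6)
    return out
```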

  1. Masaki Kitaura, Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Tone mapping for HDR images with dimidiate luminance and spatial distributions of bright and dark regions",
    Proc. SPIE Electronic Imaging, Vol. 8292, pp. 829205-01-829205-11, Jan 2012. (Paper:pdf)

Augmented Immersive Panoramas

We developed an augmented immersive panorama system that enables virtual tourism beyond time and space. An immersive panorama is a way of displaying omnidirectional panoramic images that lets viewers look around from a location, as in Google Street View. Our application provides the user with views of a remote location together with related information superimposed using augmented reality techniques. This study addresses the geometric and photometric registration problems required to generate high-quality augmented omnidirectional videos automatically. The user can look around the scene from the sky above the Heijo Palace Site, an ancient capital in Nara, Japan.

  1. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Store-and-replay augmented telepresence using aerial omnidirectional videos captured from an unmanned airship",
    Transactions of the Virtual Reality Society of Japan, Vol. 16, No. 2, pp. 127-138, Jun 2011. (VRSJ Outstanding Paper Award)
    (Paper:pdf) (in Japanese)
  2. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Fly-through Heijo Palace Site: Historical tourism system using augmented telepresence",
    Proc. ACM Multimedia (MM'12) Technical Demo, pp. 1283-1284, Oct 2012.
    (Abstract:pdf) / (Movie:youtube) / (Movie:wmv,11MB)
  3. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Fly-through Heijo Palace Site: augmented telepresence using aerial omnidirectional videos",
    Proc. ACM SIGGRAPH'11 Posters, Aug 2011.
    (Abstract:pdf) / (Poster:pdf,2MB)

Autopilot Aerial Omnidirectional Imaging

An omnidirectional multi-camera system (OMS) mounted on an unmanned airship captures aerial omnidirectional videos suitable for telepresence, augmented/mixed reality, and urban reconstruction. We developed a simple autopilot aerial imaging system.

  1. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Augmented telepresence using autopilot airship and omni-directional camera",
    Proc. 9th IEEE Int'l Symp. on Mixed and Augmented Reality (ISMAR'10), pp. 259-260, Oct 2010. (Paper:pdf)
  2. Fumio Okura, Masayuki Kanbara, Naokazu Yokoya:
    "Augmented telepresence using aerial imagery: An AR system with an autopilot unmanned airship and an omnidirectional camera",
    Proc. Meeting on Image Recognition and Understanding (MIRU2010), pp. 1183-1189, Jul 2010. (in Japanese)