Publications and papers from and about the See Far project focusing on supporting the ageing workforce with vision deficiencies in Europe.


Deep learning for diabetic retinopathy detection and classification based on fundus images: A review

Nikos Tsiknakis a, Dimitris Theodoropoulos b, Georgios Manikis a, Emmanouil Ktistakis a,c, Ourania Boutsora d, Alexa Berto e, Fabio Scarpa e,f, Alberto Scarpa e, Dimitrios I. Fotiadis g,h, Kostas Marias a,b

a Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), 70013, Heraklion, Greece
b Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71004, Heraklion, Greece
c Laboratory of Optics and Vision, School of Medicine, University of Crete, 71003, Heraklion, Greece
d General Hospital of Ioannina, 45445, Ioannina, Greece
e D-Eye Srl, 35131, Padova, Italy
f Department of Information Engineering, University of Padova, 35131, Padova, Italy
g Department of Biomedical Research, Institute of Molecular Biology and Biotechnology, FORTH, 45115, Ioannina, Greece
h Department of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, 45110, Ioannina, Greece

Received 7 May 2021; Revised 12 June 2021; Accepted 18 June 2021; Available online 25 June 2021.
https://doi.org/10.1016/j.compbiomed.2021.104599

Abstract

Diabetic retinopathy is a retinal disease caused by diabetes mellitus and the leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on retinal fundus images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the fundus-image-based diabetic retinopathy detection pipeline. We discuss several aspects of that pipeline, ranging from the datasets widely used by the research community and the preprocessing techniques employed (and how these accelerate and improve model performance), to the development of deep learning models for the diagnosis and grading of the disease as well as the localization of its lesions. We also discuss models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.
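To make the preprocessing step mentioned above concrete, here is a minimal sketch of two operations commonly applied to fundus images before they reach a deep learning model: a center crop to a square region and per-channel intensity normalization. This is an illustrative sketch (NumPy only; the function name and parameters are assumptions, not taken from the reviewed works):

```python
import numpy as np

def preprocess_fundus(img: np.ndarray, size: int = 512) -> np.ndarray:
    """Minimal fundus preprocessing sketch: center-crop to a square,
    normalize each channel to [0, 1], and resize to size x size."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s].astype(np.float32)
    # Per-channel min-max normalization to [0, 1].
    lo = crop.min(axis=(0, 1), keepdims=True)
    hi = crop.max(axis=(0, 1), keepdims=True)
    crop = (crop - lo) / np.maximum(hi - lo, 1e-8)
    # Nearest-neighbour resize via index sampling (avoids extra dependencies).
    idx = np.arange(size) * s // size
    return crop[np.ix_(idx, idx)]
```

Real pipelines often add further steps discussed in the literature, such as green-channel extraction or contrast enhancement, but the crop-and-normalize pattern above is the common core.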

Keywords

Artificial intelligence
Classification
Deep learning
Detection
Diabetic retinopathy
Fundus
Retina
Review
Segmentation

CONCERTED RESPONSE TO THE GREEN PAPER ON AGEING

The See Far project, together with three other Horizon 2020 projects (Ageing@Work, sustAGE and CO-ADAPT) and led by SmartWork, provided comments on the EU Green Paper on Ageing, which was open for consultation until 21 April.


From the response, here are some principles we suggest should be followed when promoting technological solutions for “Making the most out of our working lives”:

• Technologies should be designed based on the needs of the users, taking into account their available resources and skills (financial, material and digital).

• As office workers often struggle to re-enter the labour market because they lack the digital skills needed to use technological solutions, adequate, specific and adapted training should be offered.

• More effort should be put into promoting the positive aspects of a longer working life and of reskilling among older people, including the health and wellbeing, financial and social aspects.

• Cross-sectoral action and responsibility should be promoted, with industry playing a crucial role not only in providing technologies and tools but also in adapting those introduced into the working environment to the needs of older employees.

• The deployment of technologies should take into account socio-economic aspects, i.e. the availability and cost of devices and internet access, as well as the health and digital literacy of users.


Review of eye tracking metrics involved in emotional and cognitive processes

Abstract

Eye behaviour provides valuable information revealing one’s higher cognitive functions and affective state. Although eye tracking is gaining ground in the research community, it is not yet a popular approach for the detection of emotional and cognitive states. In this paper, we present a review of eye- and pupil-tracking metrics (such as gaze, fixations, saccades, blinks, pupil size variation, etc.) utilized for the detection of emotional and cognitive processes, focusing on visual attention, emotional arousal and cognitive workload. In addition, we investigate their involvement as well as the computational recognition methods employed for reliable emotional and cognitive assessment. The publicly available datasets employed in relevant research efforts were collected, and their specifications and other pertinent details are described. Multimodal approaches that combine eye-tracking features with other modalities (e.g. biosignals), along with artificial intelligence and machine learning techniques, were also surveyed in terms of their recognition/classification accuracy. The limitations, current open research problems and prospective future research directions for the use of eye tracking as the primary sensor modality are discussed. This study aims to comprehensively present the most robust and significant eye/pupil metrics in the available literature, towards the development of a robust emotional or cognitive computational model.
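As an illustration of how fixation metrics are derived from raw gaze samples, the following is a minimal sketch of the classic dispersion-threshold identification (I-DT) algorithm. The thresholds and function name are illustrative assumptions; a real pipeline would also handle blinks, noise and sensor dropout:

```python
def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection sketch.

    x, y : gaze coordinates (e.g. degrees of visual angle)
    t    : timestamps in seconds
    Returns a list of (start_index, end_index) fixation windows.
    """
    def dispersion(a, b):
        # Dispersion = (max(x) - min(x)) + (max(y) - min(y)) over the window.
        return (max(x[a:b + 1]) - min(x[a:b + 1])) + \
               (max(y[a:b + 1]) - min(y[a:b + 1]))

    fixations = []
    i, n = 0, len(t)
    while i < n:
        # Grow a window until it spans at least min_duration.
        j = i
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(i, j) <= max_dispersion:
            # Extend the window while dispersion stays below threshold.
            while j + 1 < n and dispersion(i, j + 1) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1  # Drop the first sample and try again.
    return fixations
```

From such fixation windows, the metrics surveyed in the paper (fixation count, mean fixation duration, saccade amplitude between successive fixations, etc.) follow directly.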


Concerted paper between partners and projects: THE ROLE OF AI TECHNOLOGIES IN WORKING THROUGH COVID-19 AND ITS AFTERMATH

The See Far project, together with six other projects, has recently worked on a non-scientific paper regarding the role of AI technologies in working through COVID-19 and its aftermath. This close collaboration was led by

SmartWork project http://www.smartworkproject.eu/

and joined by:

Ageing@Work: https://ageingatwork-project.eu/

BIONIC https://bionic-h2020.eu/

CO-ADAPT https://coadapt-project.eu/

See Far https://www.see-far.eu/

sustAGE https://www.sustage.eu/

WorkingAge https://www.workingage.eu/

The See Far project played an active role in elaborating and writing about digital solutions and systems addressing the implications of COVID-19 in different work environments.

DOWNLOAD THE PAPER

 

THE ROLE OF AI TECHNOLOGIES IN WORKING THROUGH COVID-19 AND ITS AFTERMATH


The main aims of the concerted paper are to:

  • reflect on and share the implications of COVID-19 for work environments, now that teleworking has become a main instrument and necessity for us all;
  • understand how digital solutions and systems could be developed, adapted, optimized or applied to better respond to the challenges of the pandemic context.

The contributions collected followed three proposed guiding questions, although the contributing partners were free to explore other aspects they considered relevant:

  • How can technology applied in the work environment be leveraged to respond to the emerging challenges raised by COVID-19?
  • Have the actual priorities and needs changed, considering the pandemic situation?
  • Is this an opportunity for projects such as SmartWork to underline the need to introduce a digital revolution in the workplace?

Over the following pages, the different contributions are compiled. The individual contributions and opinions of SmartWork partners constitute Chapter 1, while Chapter 2 collects the contributions provided by the different projects funded under the same call.

Some similarities can be highlighted:

  • The desire to leverage the existing knowledge to rapidly respond to the challenges of this new (even if hopefully temporary) era;
  • The understanding of the challenges ahead and the will to overcome them collectively.

 

The creation of this common paper is already a collective first step, and surely a positive sign for the future. Some conclusions can also be drawn:

  • COVID-19 has severely impacted the way we live, work and spend free time;
  • it has effects on physical and mental health and wellbeing.

AI can play a great role in providing solutions, not only during the emergency but also in the long term, and not only for office workers but also for more traditional industries.

Development of an automated analysis for glaucoma screening of videos acquired with smartphone ophthalmoscope

ARVO Annual Meeting Abstract  |   June 2020
Abstract

Purpose : Widespread screening is crucial for the early diagnosis and treatment of glaucoma. The development of smartphone-based ophthalmoscopes represents a valuable resource, since they are low-cost, portable solutions able to image the optic disc easily. However, the acquired images have a narrower field of view and lower quality than images from conventional fundus cameras, so standard image-processing algorithms are not designed for this new type of image. We therefore propose a completely automated analysis of videos acquired with the D-EYE smartphone-based ophthalmoscope, capable of assisting ophthalmologists in performing diagnosis. This analysis is going to be integrated and tested in the See Far European project (#826429).

Methods : 1000 frames were selected from 30 videos (from different healthy subjects) acquired with the D-EYE system, and the optic disc was manually segmented. These images (1080×1920 px, resized to 512×1024 px) were used as the training set for a u-shaped convolutional neural network (U-Net), with 4 blocks in both the encoder and decoder paths, to segment the optic disc. After training, the U-Net analyzes each frame of a video; optic disc area and focus are used to select the best frame. The best frame is then cropped around the optic disc, resized to 512×512 px, and the cup segmentation is performed by a second U-Net. This second network was trained on the 750 images of the RIGA dataset, cropped around the optic disc, resized to 512×512 px, and blurred with Gaussian filters so as to match the quality of images from the D-EYE system. Finally, the algorithm derives the VCDR (vertical cup-to-disc ratio).
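The final VCDR step can be sketched as follows, under the assumption that the two U-Nets output binary disc and cup masks (a minimal NumPy-only sketch; the function names are hypothetical, not taken from the authors’ implementation):

```python
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of the bounding box of a binary mask."""
    rows = np.any(mask, axis=1)
    if not rows.any():
        return 0
    idx = np.where(rows)[0]
    return int(idx[-1] - idx[0] + 1)

def vcdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from two binary segmentation masks."""
    disc_h = vertical_extent(disc_mask)
    if disc_h == 0:
        raise ValueError("empty disc mask")
    return vertical_extent(cup_mask) / disc_h
```

With a definition like this, the group separation reported below (VCDR < 0.40 for healthy subjects, VCDR > 0.45 for subjects with glaucoma) is a direct comparison of the two mask heights.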

Results : On 5 healthy subjects and 5 subjects with glaucoma, a perfect agreement with manual analysis was obtained in the best frame selection and an accuracy ≥96% was obtained for both disc and cup segmentation. Finally, the computed VCDR denotes a substantial difference between the two groups of subjects: VCDR<0.40 for healthy subjects, VCDR>0.45 for subjects with glaucoma.

Conclusions : We developed a proof-of-concept automated analysis of videos acquired with the D-EYE system (Fig. 1). The proposed analysis provides visual and quantitative information that assists ophthalmologists: clinicians can look at a single image rather than an entire video and can easily verify the reliability of the automated analysis. These results encourage further development of the proposed method and its investigation on a larger dataset.

This is a 2020 ARVO Annual Meeting abstract.