In this modern age, more and more information is created in digital form, and there is an increasing number of ways to consume it (TV screens, smartwatches, computers, car systems, etc.). It is therefore vitally important that this information not only be available through different media, but also that it reach more and more people, regardless of their technical prowess, disabilities, or means of interacting with technology. It is to address these challenges that several rules and regulations were created, to allow everyone the same basic access to information that we sometimes take for granted.

Achieving this lofty aim, though, is difficult. Although we have tools to create U/A-compliant (Universally Accessible) documents from scratch, converting and validating existing documents is (for now at least) work that requires care and manual inspection, which takes time. In addition, whilst the Universal Accessibility standards may have been created primarily to address the needs of people with disabilities, there are other issues that could be addressed. The ability to collect data can easily outstrip the throughput of data analysis, leading to masses of documents containing unknown information. It is tempting to make a small foray into this world of “dark data” to analyze and re-purpose some of these documents so that their contents can be processed more easily.

In this talk, we will demonstrate typical accessibility cases found in the wild that we should try to address, and discuss some existing strategies that can be used to mitigate the migration costs. In the near future, we can automate this process to some extent with Machine Learning and Image Recognition tools, making the vast amount of existing digital information (whether in HTML, PDF, or other formats) available to a much broader audience, and through an increasingly varied set of delivery mechanisms.
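To make one of these typical in-the-wild cases concrete, here is a minimal sketch (in Python, using only the standard library; the file name `legacy_page.html` and the reporting format are hypothetical illustrations, not tooling from the talk) that flags `<img>` elements missing an `alt` attribute, one of the most common accessibility gaps in existing HTML:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute entirely.

    Note: alt="" is legitimate for purely decorative images,
    so only a missing attribute is reported here; deciding
    whether existing alt text is *meaningful* still needs
    the kind of manual inspection discussed in the talk.
    """

    def __init__(self):
        super().__init__()
        self.missing = []  # (line, column) of offending tags

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())

# Hypothetical usage: scan a legacy page and report offending tags.
checker = AltTextChecker()
with open("legacy_page.html", encoding="utf-8") as f:
    checker.feed(f.read())
for line, col in checker.missing:
    print(f"<img> without alt text at line {line}, column {col}")
```

Checks like this can be automated cheaply; the harder part, and where Machine Learning and Image Recognition tools could eventually help, is generating an accurate textual description for each flagged image.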