Project Description

The outcome of this project will be a static web page that can be used to diagnose different diseases from a chest X-ray.

Goal for this Hackweek

The goal is to learn TensorFlow.js and to put into practice what I have learned in week 1 of https://www.coursera.org/learn/ai-for-medical-diagnosis. A small sketch of the idea follows the resource list below.

Resources

  • https://www.coursera.org/learn/ai-for-medical-diagnosis
  • https://nihcc.app.box.com/v/ChestXray-NIHCC
  • https://observablehq.com/@tvirot/machine-learning-in-javascript-with-tensorflow-js-part-ii
  • https://www.tensorflow.org/js/tutorials/conversion/import_keras
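
A minimal sketch of what the page could do: load a Keras model converted with the tensorflowjs_converter and run it on an X-ray image in the browser. The model path, the 224x224 input size and the label list are placeholders, not the actual values from the course material.

```js
import * as tf from '@tensorflow/tfjs';

// Hypothetical label list; the real model defines its own output classes.
const LABELS = ['Cardiomegaly', 'Pneumonia', 'No finding'];

async function diagnose(imgElement) {
  // model.json is the output of tensorflowjs_converter run on the Keras model.
  const model = await tf.loadLayersModel('model.json');

  // Turn the <img> element into a normalized tensor matching the assumed 224x224 input.
  const input = tf.tidy(() =>
    tf.browser.fromPixels(imgElement)
      .resizeBilinear([224, 224])
      .toFloat()
      .div(255)
      .expandDims(0)
  );

  const probs = await model.predict(input).data();
  input.dispose();

  // Pair every label with its predicted probability.
  return LABELS.map((label, i) => ({ label, prob: probs[i] }));
}
```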

Looking for hackers with the skills:

javascript tensorflow machinelearning

This project is part of:

Hack Week 21

Activity

  • over 2 years ago: jordimassaguerpla added keyword "javascript" to this project.
  • over 2 years ago: jordimassaguerpla added keyword "tensorflow" to this project.
  • over 2 years ago: jordimassaguerpla added keyword "machinelearning" to this project.
  • over 2 years ago: jordimassaguerpla started this project.
  • over 2 years ago: jordimassaguerpla originated this project.

  • Comments

    • jordimassaguerpla
      over 2 years ago by jordimassaguerpla

      The code is in https://github.com/jordimassaguerpla/susehackweek2022 and the results are at https://github.com/jordimassaguerpla/susehackweek2022#results

    Similar Projects

    Try to render Agama in a TUI browser by ancorgs

    Description

    Agama is a new Linux installer that will very likely be used for SLES 16. It offers a modern and convenient web interface that can be executed both locally and remotely.

    But of course some users will miss the old TUI (ncurses) interface of the YaST installer.

    So I want to experiment with whether it would be possible to render a simplified version of the web interface for TUI browsers. That's only doable and maintainable if we keep the current technology stack we use for rendering the full-blown page, simply replacing complicated UI elements with others that are easy to render. That means the browser would need to support JavaScript.

    Chawan seems to be almost there regarding support for Javascript, XHR and related technologies. But according to this conversation, the next missing piece would be to support recursive import of module script tags.

    Unfortunately, Chawan is written in Nim and I'm pretty sure a week is not enough time for me to learn Nim, implement the feature in Chawan and then fix whatever the next obstacle is on the Agama side.

    But if someone could take care of the Nim part, I would do the same with the Agama one. So this is basically a call for help to get this project even started.


    Design the new UI for storage configuration at Agama by ancorgs

    Description

    We are in the process of re-designing the web user interface for configuring storage at Agama. We expected to have a clear idea of what we wanted before starting Hack Week, but the idea is still not that clear. So I will use my Hack Week time to try several prototypes, since I really want this to be done.

    Goals

    Have a prototype using Patternfly components and addressing all the use-cases we want to cover. Easy for the easy cases. Capable for the complex ones.


    obs-service-vendor_node_modules by cdimonaco

    Description

    When building a JavaScript package for OBS, one option is to use https://github.com/openSUSE/obs-service-node_modules as a source service to get the project's npm dependencies available for package building.

    obs-service-vendor_node_modules aims to be a source service that vendors npm dependencies, installing them with npm install (optionally only production ones) and then creating a tar archive of the installed dependencies.

    The tar archive will be used as a source in the package build definitions.
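
    A minimal sketch of the vendoring step described above (not the actual service code); the --omit=dev flag for a production-only install and the archive name are assumptions for illustration:

```js
#!/usr/bin/env node
// Sketch only: install the npm dependencies and archive them for use as an OBS source.
const { execSync } = require('child_process');

function vendorNodeModules({ productionOnly = true, output = 'node_modules.tar.gz' } = {}) {
  // Install dependencies from package.json / package-lock.json
  // (--omit=dev keeps only production dependencies).
  execSync(`npm install${productionOnly ? ' --omit=dev' : ''}`, { stdio: 'inherit' });

  // Pack the installed tree into a tar archive.
  execSync(`tar czf ${output} node_modules`, { stdio: 'inherit' });
}

vendorNodeModules();
```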

    Goals

    • Create an obs service package that vendors the npm dependencies as tar archive.
    • Maybe add some macros to unpack the vendor package in the specfiles

    Resources


    Agama installer on-line demo by lslezak

    Description

    The Agama installer provides a quite complex user interface. We have some screenshots on the web page, but as it is basically a web application it would be nice to have an on-line demo where users could click around and check it live.

    The problem is that the Agama server directly accesses the hardware (storage probing) and loads installation repositories. We cannot easily mock this in the on-line demo, so the easiest way is to have just a read-only demo. You could explore the configuration options but not change anything; all changes would be ignored.

    The read-only demo would be a bit limited, but I still think it would be useful for potential users to get a feeling for the new Agama installer and become familiar with it before using it in a real installation.

    As a proof of concept I already created this on-line demo.

    The implementation basically builds Agama in two modes: a recording mode, where it saves all REST API responses, and a replay mode, where it returns the previously recorded responses for the REST API requests. Recording in the browser is inconvenient and error-prone; there should be some scripting instead (see below).
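
    A minimal sketch of the replay idea, assuming the recorded responses are shipped as a JSON map keyed by method and path; the file name and key format are made up for illustration:

```js
// Replace fetch with a read-only stub that serves recorded responses.
import recorded from './recorded-responses.json';

const realFetch = window.fetch.bind(window);

window.fetch = async (url, options = {}) => {
  const method = (options.method || 'GET').toUpperCase();
  const key = `${method} ${new URL(url, window.location.origin).pathname}`;

  if (key in recorded) {
    // Serve the previously recorded REST API response.
    return new Response(JSON.stringify(recorded[key]), {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  // Anything not recorded (e.g. write requests) falls through unchanged.
  return realFetch(url, options);
};
```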

    Goals

    • Create an Agama on-line demo which can be easily tested by users
    • The Agama installer is still in an alpha phase and in active development, so the on-line demo needs to be easily rebuilt with the latest Agama version
    • Ideally there should be some automation so the demo page is rebuilt automatically without any developer interactions (once a day or week?)

    TODO

    • Use OpenAPI to get all Agama REST API endpoints, write a script which queries all the endpoints automatically and saves the collected data to a file (see this related PR). A rough sketch of such a script follows this list.
    • Write a script for starting an Agama VM (use libvirt/qemu?). The script should ensure we always use the same virtual HW, so that if we need to dump the latest REST API state we get the same (or very similar) data. This should ensure the demo page does not change much regarding the storage proposal etc.
    • Fix changing the product, currently it gets stuck after clicking the "Select" button.
    • Move the mocking data (the recorded REST API responses) outside the Agama sources; it is too big and will probably be updated often. To avoid cluttering the history, keep it in a separate GitHub repository
    • Allow changing the UI language
    • Display some note (watermark) in the page so it is clear it is a read-only demo (probably with some version or build date to know how old it is)
    • Automation for building a new demo page from the latest sources. There should be some check which ensures the recorded data still matches the OpenAPI specification.
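
    A minimal sketch of the endpoint-dumping script mentioned in the first TODO item, assuming Agama serves its OpenAPI document itself and that Node 18+ (global fetch) is available; the spec URL, base URL and output file name are guesses, not the real Agama paths:

```js
#!/usr/bin/env node
// Sketch only: walk the OpenAPI document and dump every GET endpoint to one JSON file.
const fs = require('fs');

const BASE = process.env.AGAMA_URL || 'http://localhost:9090';

async function dumpApi() {
  const spec = await (await fetch(`${BASE}/api/openapi.json`)).json();
  const dump = {};

  for (const [path, ops] of Object.entries(spec.paths || {})) {
    if (!ops.get) continue;            // only read-only endpoints
    if (path.includes('{')) continue;  // skip parameterized paths in this sketch

    const res = await fetch(`${BASE}${path}`);
    dump[`GET ${path}`] = await res.json();
  }

  fs.writeFileSync('recorded-responses.json', JSON.stringify(dump, null, 2));
}

dumpApi();
```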

    Changing the UI language

    This will be quite tricky because selecting the proper translation file is done on the server side. We would probably need to completely re-implement that logic on the browser side and adapt the server for that.

    Also, some REST API responses contain translated texts (storage proposal, pattern names in software). We would need to query the respective endpoints in all supported languages and return the correct response at runtime according to the currently selected language.
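
    A minimal sketch of how the demo could pick the right recorded response at runtime, assuming the responses were recorded once per language into a nested map; the key layout and the way the selected language is stored are assumptions:

```js
// Sketch only: pick the recorded response that matches the language chosen in the browser.
import recordedByLang from './recorded-responses-by-lang.json';

function currentLanguage() {
  // In the demo the language is resolved in the browser (e.g. from localStorage)
  // instead of on the server.
  return localStorage.getItem('agamaDemoLang') || 'en';
}

function recordedResponse(key) {
  const perLang = recordedByLang[currentLanguage()] || recordedByLang.en;

  // Fall back to English if this endpoint was not recorded for the selected language.
  return perLang[key] ?? recordedByLang.en[key];
}
```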

    Resources


    Editor mode at Agama web interface by ancorgs

    Description

    Agama is a new Linux installer that will very likely be used for SLES 16.

    It takes a configuration (in JSON format) as input and offers several interfaces to build that configuration in an easy and interactive way.

    I was considering the possibility of adding a "text editor" mode to the web interface, similar to the XML editor available in virt-manager. That could be used to see how changes in the UI translate into changes in the configuration.
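
    A minimal sketch of the idea in plain DOM code rather than the actual Agama stack; the element id and the applyConfig hook are made up for illustration:

```js
// Sketch only: keep a <textarea> in sync with the JSON configuration.
const editor = document.getElementById('config-editor');

// UI -> editor: show the current configuration as pretty-printed JSON.
function renderConfig(config) {
  editor.value = JSON.stringify(config, null, 2);
}

// Editor -> UI: parse manual edits back and hand them to the rest of the interface.
editor.addEventListener('input', () => {
  try {
    applyConfig(JSON.parse(editor.value)); // applyConfig is a hypothetical hook
  } catch {
    // Ignore intermediate, not-yet-valid JSON while the user is typing.
  }
});
```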

    Goals

    • Refresh my knowledge about UI development for Agama, since there was a major overhaul recently (adopting TanStack Query) and I need to learn the new way to do things.
    • Please hackers who always want to know how things work internally. :-)


    Make more sense of openQA test results using AI by livdywan

    Description

    AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.

    User Story

    Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?

    Goals

    • Leverage a chat interface to help Allison
    • Create a model from scratch based on data from openQA
    • Proof of concept for automated analysis of openQA test results (a rough sketch follows this list)
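
    A minimal sketch of what such a proof of concept could look like, assuming a locally running Ollama instance; the openQA instance, job id, log file name and model name are placeholders:

```js
// Sketch only: feed the log of a failed job to a local model and ask for an explanation.
const OPENQA = 'https://openqa.opensuse.org'; // placeholder instance
const JOB_ID = 123456;                        // placeholder failed job

async function explainFailure() {
  // openQA serves job logs as plain files under /tests/<id>/file/...
  const log = await (await fetch(`${OPENQA}/tests/${JOB_ID}/file/autoinst-log.txt`)).text();

  // Ask a local Ollama instance (default port 11434); the model name is a placeholder.
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3',
      prompt: `Summarize the likely cause of this openQA failure:\n\n${log.slice(-8000)}`,
      stream: false,
    }),
  });

  console.log((await res.json()).response);
}

explainFailure();
```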

    Bonus

    • Use AI to suggest solutions to merge conflicts
      • This would need a merge conflict editor that can suggest solving the conflict
    • Use image recognition for needles

    Resources

    Timeline

    Day 1

    • Conversing with open-webui to teach me how to create a model based on openQA test results

    Day 2

    Highlights

    • I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no significant difference in the results for my case.
    • Convincing the chat interface to produce code specific to my use case required very explicit instructions.
    • Asking for advice on how to use open-webui itself better was frustratingly unfruitful, in both trivial and more advanced regards.
    • Documentation on the source materials used by LLMs and tools for this purpose seems virtually non-existent - specifically whether a logo can be generated based on particular licenses.

    Outcomes

    • Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although currently some fancy features such as grounding and generated podcasts are missing.
    • Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.


    FamilyTrip Planner: A Personalized Travel Planning Platform for Families by pherranz

    Description

    FamilyTrip Planner is an innovative travel planning application designed to optimize travel experiences for families with children. By integrating APIs for flights, accommodations, and local activities, the app generates complete itineraries tailored to each family’s unique interests and needs. Recommendations are based on customizable parameters such as destination, trip duration, children’s ages, and personal preferences. FamilyTrip Planner not only simplifies the travel planning process but also offers a comprehensive, personalized experience for families.

    Goals

    This project aims to:

    • Create a user-friendly platform that assists families in planning complete trips, from flight and accommodation options to recommended family-friendly activities.
    • Provide intelligent, personalized travel itineraries using artificial intelligence to enhance travel enjoyment and minimize time and cost.
    • Serve as an educational project for exploring Go programming and artificial intelligence, with the goal of building proficiency in both.

    Resources

    To develop FamilyTrip Planner, the project will leverage:

    • APIs such as Skyscanner, Google Places, and TripAdvisor to source real-time information on flights, accommodations, and activities.
    • The Go programming language to manage data integration, API connections, and backend development.
    • Basic machine learning libraries to implement AI-driven itinerary suggestions tailored to family needs and preferences.