Consulting scientific database...
We aimed to create an AI-based search engine that relies solely on the given .csv data and official resources about space biology experiments. Browsing and researching through the raw files can be difficult for the end user, so the main aim of this project was to build a web interface that makes things simpler. The name Xenosearch combines the word Xenology with the word Search; xenology is a synonym for astrobiology. Xenosearch uses AI to answer questions about space biology research by analyzing thousands of scientific publications, such as those from PubMed Central. Our system reads pre-scraped research papers to provide accurate, evidence-based answers. To get an answer, the user types a question into the search bar. Note that the longer the question, the longer the loading takes. Once processing finishes, the answer appears inside a block; to ask a new question, simply reload the page and it returns to its initial state. You can also ask for article titles on a given topic, and the AI will list the articles that contain information relevant to your question.
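The answer pipeline described above can be sketched roughly as follows. This is a minimal illustration rather than our production code: it assumes the pre-scraped papers live as .txt files in a `papers/` directory (a hypothetical path) and ranks them by simple keyword overlap with the question before the best matches are handed to the model.

```python
import os
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z0-9]+", text.lower())

def rank_papers(question: str, papers_dir: str, top_k: int = 3) -> list[str]:
    """Return up to top_k paper filenames sharing the most words with the question."""
    q_words = set(tokenize(question))
    scores = Counter()
    for name in os.listdir(papers_dir):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(papers_dir, name), encoding="utf-8") as f:
            doc_words = set(tokenize(f.read()))
        scores[name] = len(q_words & doc_words)
    # Drop papers with no overlap at all.
    return [name for name, score in scores.most_common(top_k) if score > 0]
```

A real system would use a stronger relevance measure, but even this toy ranking shows why answering article-title questions is cheap: the filenames of the best-scoring papers can be returned directly.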
Our project is a website with an easy-to-use UI, hosted on our VPS, designed for searching and asking questions about approved space biology experiments. We began by creating a visually appealing, simple-to-browse HTML webpage offline. After styling it with CSS to match the design aesthetic praised by NASA Space Apps, we added functional elements such as a toggle menu and a search bar.

Next, we needed to build our backend, which required a server. We rented a VPS and a domain, chose Debian 13 (Linux) as the operating system that best fit our needs, and configured the domain to point to our VPS's IP address, making the site accessible.

With the server set up, we focused on two main tasks. First, we made the frontend webpage accessible online using the NGINX web server. A crucial step in this phase was establishing an encrypted HTTPS connection. We used the Electronic Frontier Foundation's Certbot, which automatically modified the NGINX configuration file /etc/nginx/sites-available/xenosearch to install an SSL/TLS certificate.

The second part of our development was the backend software. We installed and configured Ollama, a service that lets us run LLM models locally, and chose the deepseek-r1 model for its memory efficiency and small size. To ensure the model could answer questions based on our specific resources, we wrote a Python script that scrapes the data listed in a .csv file, downloads all the content as .txt files, and stores them in a directory. This step was necessary because local AI models cannot access web links directly and require plain text to process. Finally, we created a Python application (using the Flask framework) and a JavaScript search script. Together, these convert user input from the HTML page into queries for the AI and then display the answers on our site.
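The .csv-to-.txt scraping step can be sketched like this. This is a hedged example, not our exact script: the `Title` and `Link` column names are assumptions, and the HTML-to-text conversion here is a deliberately naive tag strip.

```python
import csv
import os
import re
import urllib.request

def read_rows(csv_path: str) -> list[tuple[str, str]]:
    """Parse (title, url) pairs from the source CSV. Column names are assumptions."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return [(row["Title"], row["Link"]) for row in csv.DictReader(f)]

def scrape_to_txt(csv_path: str, out_dir: str) -> None:
    """Download every linked publication and store it as a plain .txt file."""
    os.makedirs(out_dir, exist_ok=True)
    for title, url in read_rows(csv_path):
        # Turn the title into a filesystem-safe filename.
        safe = re.sub(r"[^A-Za-z0-9]+", "_", title).strip("_")[:80]
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
        with open(os.path.join(out_dir, safe + ".txt"), "w", encoding="utf-8") as f:
            f.write(text)
```

Running this once up front leaves the model with a directory of plain-text papers, which is exactly the format a local LLM can consume.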
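The Flask backend's core job is to wrap the user's question, together with the retrieved paper text, into a request for the local Ollama service. A rough sketch under stated assumptions: it uses Ollama's default local endpoint (http://localhost:11434/api/generate) and the deepseek-r1 model name from above, but the prompt wording is ours and purely illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_prompt(question: str, context: str) -> str:
    """Combine retrieved paper text and the user's question into one prompt."""
    return (
        "Answer using only the space biology excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def ask_model(question: str, context: str, model: str = "deepseek-r1") -> str:
    """Send a non-streaming generate request to the local Ollama service."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(question, context),
        "stream": False,  # wait for the full answer instead of streaming tokens
    }).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the deployed site, a Flask route would call a function like `ask_model` and return its result as JSON for the JavaScript search script to render.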
Since a Flask application must be launched with a command, we created a systemd service, xenosearch.service, to ensure it starts automatically at boot. This service also keeps the required Flask process running. You can test the web app by visiting www.xenosearch.org (we recommend using a computer for the best experience, but mobile works too).
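A unit file of the kind described might look like the following. This is a hedged sketch rather than our exact file: the install path, user, and entry-point script name (app.py) are assumptions.

```ini
[Unit]
Description=Xenosearch Flask backend
After=network.target

[Service]
# Paths and user are illustrative; substitute the real install location.
User=www-data
WorkingDirectory=/opt/xenosearch
ExecStart=/usr/bin/python3 /opt/xenosearch/app.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable --now xenosearch.service`, a unit like this starts the Flask process at boot and restarts it after failures.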