Marco Tasca

Data Scientist

View CV

Hi! I am Marco, a passionate Data Scientist and Computational Neuroscientist at the beginning of my career, with a strong background in machine learning and neural networks.

My current passion is investigating how the brain works, and how we can use this knowledge to create more efficient and powerful AI systems.


"It's not the place, but the people."

Professional Experience

My career journey and accomplishments

Undergraduate Researcher

FMI Basel • 2025 - present

Conducted research in computational neuroscience, in the beautiful setting of the Novartis Campus, focusing on efficiency in Spiking Neural Networks. Collaborated with the amazing Zenke Lab to advance brain-inspired AI algorithms.

Here, under the guidance of Friedemann Zenke and Julia Gygax, and with valuable feedback from the rest of the lab, I developed a novel neuron model (Hard Reset, HR) that retains the same accuracy as the LIF neuron while improving computational efficiency (EFLOPs [1]). Moreover, I developed and tested a new fluctuation-driven initialization [2] to improve the convergence speed and stability of HR neurons during training.
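For readers curious about what a reset mechanism looks like in practice, here is a minimal, purely illustrative sketch of a discrete-time LIF update where a spike either subtracts the threshold (soft reset) or snaps the membrane back to zero (hard reset). All names and constants below are my own assumptions for illustration, not the actual HR model or the training code from this project.

lif_reset_sketch.py
import numpy as np

def lif_step(v, i_in, beta=0.9, v_th=1.0, hard_reset=True):
    """One discrete-time LIF update (toy example, not the HR model above)."""
    v = beta * v + i_in                    # leaky integration of the input current
    spike = (v >= v_th).astype(float)      # spike wherever the threshold is crossed
    if hard_reset:
        v = v * (1.0 - spike)              # hard reset: membrane snaps back to zero
    else:
        v = v - spike * v_th               # soft reset: subtract the threshold
    return v, spike

# Toy usage: drive 5 neurons with noisy input for 10 time steps
rng = np.random.default_rng(0)
v = np.zeros(5)
for _ in range(10):
    v, spikes = lif_step(v, rng.normal(0.5, 0.3, size=5))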

Novartis Campus FMI

Undergraduate Researcher

SMILIES Lab & Links Foundation • 2023 - 2024

Here I describe the research I conducted for my master's thesis, carried out as part of the European B-CRATOS project.

My thesis work was carried out in close collaboration between the SMILIES Lab (DAUIN, Politecnico di Torino), led by Stefano Di Carlo and Alessandro Savino, and Links Foundation, a research foundation created through a partnership between Politecnico di Torino and Fondazione Compagnia di San Paolo.

The thesis was supervised by Alessandro Carpegna (PhD, SMILIES) and senior researcher Paolo Viviani (Links Foundation). I personally thank both my supervisors and the whole SMILIES lab for their support and invaluable guidance throughout the research process.

Links SMILIES Lab B-Cratos

Data Scientist Intern

Intesa SpA • 2022 - 2023

Started as a curricular internship and continued afterward. Worked on multiple projects in the beautiful setting of OGR Tech, leading the testing and deployment of one of the first RAG systems at an industrial level and developing machine learning models to improve client segmentation and targeting.

Created pipelines and database systems for multi-format, multi-source data collection and preprocessing. Collaborated with cross-functional teams to integrate data-driven insights into product development, decision-making processes, reports, and KPIs.
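For context on the RAG work mentioned above: retrieval-augmented generation boils down to retrieving the documents most relevant to a query and feeding them to a language model as part of the prompt. The sketch below is a deliberately tiny, self-contained illustration with made-up documents and a bag-of-words retriever; the production system was, of course, far more involved.

toy_rag_sketch.py
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words embedding; a real system would use a learned encoder."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def retrieve(query, documents, vocab, k=2):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query, vocab)
    scores = []
    for doc in documents:
        d = embed(doc, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(d)
        scores.append(float(q @ d / denom) if denom else 0.0)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query, context_docs):
    """Assemble the augmented prompt that would be sent to the language model."""
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy usage with three in-memory "documents"
docs = [
    "KPI reports are reviewed every quarter.",
    "Client segments are refreshed every month.",
    "Raw data lands in the data warehouse overnight.",
]
vocab = ["kpi", "reports", "client", "segments", "quarter", "month", "warehouse"]
print(build_prompt("How often are client segments refreshed?",
                   retrieve("client segments refreshed", docs, vocab)))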

You can read a little more about the amazing AI R&D team I collaborated with, and about the project I am most proud of, by clicking on the little tabs.

OGR-Tech Intesa

Education

My academic background and qualifications

M.Sc. Data Science and Engineering

Politecnico di Torino • 2020 - 2024

The program was taught in English. Studied intelligent systems, data science, and machine learning with a focus on practical applications. Explored graph theory, deep learning, and statistical modeling, and gained hands-on experience with big data tools and brain-inspired AI (Spiking Neural Networks). Learned from inspiring professors, collaborated with fellow students, and worked on real-world projects.

B.Sc. Software Engineering

Politecnico di Torino • 2015 - 2020

Equipped from silicon to code. Trained in hardware (Architecture/Assembly), mathematical modeling (Physics/Analysis/Control), and algorithmic problem-solving (C/Java/Graphs). Rounded out with networking and security fundamentals.

PoliTo Academic Transcript

Technical Skills

My toolbox for research, data, and development

marco_skills.py
# Marco's Technical Arsenal - Powered by curiosity and caffeine ☕

class DataScientist:
    def __init__(self):
        # Programming Languages
        self.languages = {
            "Python": "ML, data science, research, scripting",
            "Java": "backend, algorithms, software engineering",
            "C": "hardware, low-level, performance",
            "JavaScript": "web, dashboards, interactivity",
            "R": "statistics, data visualization",
            "MATLAB": "simulation, prototyping"
        }

        # Python Libraries & ML Frameworks
        self.ml_frameworks = [
            "PyTorch", "Keras", "TensorFlow",  # Deep learning powerhouses
            "Snntorch", "Stork",  # Spiking neural networks
            "Scikit-Learn",  # Classic ML algorithms
            "Pandas", "NumPy", "SciPy",  # Data manipulation magic
            "Streamlit"  # Interactive data apps
        ]

        # Big Data & Databases
        self.big_data_tools = {
            "distributed_computing": ["Hadoop", "Spark", "MapReduce"],
            "databases": {
                "SQL": "MySQL",
                "NoSQL": ["MongoDB", "Elasticsearch"]
            }
        }

    def statistical_analysis(self):
        return {
            "bayesian_methods": ["MCMC", "inference", "uncertainty"],
            "network_analysis": ["graph_theory", "random_walks", "epidemics"]
        }

    def devops_mlops(self):
        # Version control & collaboration
        version_control = ["Git", "GitHub"]
        experiment_tracking = ["MLflow", "Weights & Biases"]
        return {"collaboration": version_control, "reproducibility": experiment_tracking}

    def other_superpowers(self):
        return {
            "documentation": ["LaTeX", "Markdown"],
            "deployment": ["Linux", "Docker", "Kubernetes", "Singularity"],
            "vibe_coding": "Prompting LLMs - addictive but questionably useful 🤖"
        }

# Instantiate the data scientist
marco = DataScientist()

# Where & Why: These tools power research, projects, and collaborations
# From building SNNs and decoding neural signals, to deploying ML models
# and creating interactive dashboards. Always with efficiency, reproducibility,
# and a bit of fun! 🧠⚡🚀