• Assistant Professor, Umeå University
  • Senior Member, ADSLab
  • Leading Cyber Analytics and Learning Group
  • Member, Umeå AI

  • Postdocs

    We are looking for a postdoctoral researcher to work on secure federated learning. The salary is very competitive. The start date is by agreement. The opening is for 2 years, possibly renewable for further years (subject to the availability of funding). Candidates with a strong background in machine learning, distributed systems, security and privacy, anomaly detection, or adversarial federated learning are encouraged to apply. In particular, an ideal candidate would have previous experience exploring the design and implementation of secure federated learning algorithms, either by improving existing algorithms or by developing new ones; the work may be domain-dependent or domain-independent. This position is at Umeå University, Sweden. Applicants are expected, at a minimum, to have excellent verbal and written English skills, sound logic, and a demonstrated research record or potential in the above areas. If interested in this position, please email me to discuss further. In your first email, please provide your C.V. in PDF.

    Ph.D. Students

    We are looking for a PhD student with a focus on cybersecurity, in particular to work on attack and defence strategies for the cloud-edge continuum.

    Project description.
    The multilayer cloud-edge continuum poses several challenges, such as smart placement, workload prediction and relocation, energy-usage prediction, and security for critical applications and infrastructures. Distributed Denial of Service (DDoS) is one of the critical threats that disrupt the benign services provided by servers built on distributed resources across the continuum. Existing methods face key security challenges, particularly with respect to DDoS attacks and defence strategies for the cloud-edge continuum. These challenges include unlimited reassignment of resources for microservices under attack, slow reaction times, a lack of methods for validation in real environments, and weak kernel architecture in virtualised instances. Moreover, the underlying differences between occasional benign load spikes and massive or stealthy DDoS attacks remain unexplored in the cloud-edge continuum. Where machine learning (ML) is deployed both for optimising performance (benign adaptation) and for detecting attacks (DDoS), security researchers face the problem of accounting for the compound effect of each ML component's uncertainty.

    This project envisions exploring time-variant learning algorithms to understand the inherent differences between benign and malicious load patterns across the cloud-edge continuum, in particular DDoS attacks and defence strategies. The plan is to explicitly model uncertainty and to investigate which protocols, service functions, and dependency chains characterise benign load variations and which must be treated as attacks. The impact of modelled attacks will then be assessed in cloud-edge continuum scenarios where adversaries aim, for example, at resource-sharing attacks. The project will further investigate the developed defence methods under three threat models (stealthy attacks, dynamic attacks, and collateral damage caused across the continuum) and will measure the overhead of each defence strategy in terms of resource use and recovery time.

    The PhD student will contribute to the Autonomous Distributed Systems (ADS) Lab within the Department of Computing Science. The ADS Lab is an internationally leading research group whose focus ranges from distributed AI to autonomous resource management. The Lab currently comprises over 20 experienced, world-leading researchers from more than 10 different countries. For more information, see ADSLab.

    The position is funded by the Knut and Alice Wallenberg Foundation through the Wallenberg AI, Autonomous Systems and Software Program (WASP), within an ambitious new NEST project, AIR2: AI for Attack Identification, Response and Recovery, in collaboration with leading research groups on AI and cybersecurity from KTH and Linköping University. WASP is Sweden’s largest individual research program ever, a major national initiative for strategically motivated basic research, education, and faculty recruitment. The program addresses research on artificial intelligence and autonomous systems acting in collaboration with humans, adapting to their environment through sensors, information, and knowledge, and forming intelligent systems-of-systems. The vision of WASP is excellent research and competence in artificial intelligence, autonomous systems, and software for the benefit of Swedish industry. Read more: WASP, Sweden

    The graduate school within WASP is dedicated to providing the skills needed to analyze, develop, and contribute to the interdisciplinary area of artificial intelligence, autonomous systems, and software. Through an ambitious program of research visits, partner universities, and visiting lecturers, the graduate school actively supports the formation of a strong multidisciplinary and international professional network among PhD students, researchers, and industry. Read more: WASP, Graduate School

    The CARE group is looking for creative, self-motivated, and hardworking graduate students who want to pursue a Ph.D. degree by working with us on machine learning, systems security, privacy-aware learning, anomaly detection, AI security, and distributed systems. A solid background in computer science, with strengths in system building, data analysis, algorithm development, usable theory, and cybersecurity, is required. If you are interested in joining our group, please do the following.

    • Read two (2) of our recent publications (from the past year) to make sure that you are interested in what we do.
    • Send me a brief summary of your background and a C.V. (PDF or txt files only! It should include your educational background, major coursework, main skills, and any publications), a summary of the two papers you read, and what you did not like about them. In the subject line of your first email, put “Potential Ph.D. student at ADS Lab” to let me know that you have read this note.

    All students admitted and accepted to work in my group will be funded in one form or another for the course of their studies.

    • Are those Ph.D. positions guaranteed funding?
      Yes. Funding is available to all qualified Ph.D. students.
    • I don't have GRE and TOEFL scores. Can I still apply?
      Yes, GRE and TOEFL scores (for international students) aren't mandatory; however, we appreciate having a TOEFL score before admission.
    • I am from country X, can I still apply?
      Yes, anyone from any country can apply; the Department and the University place no restrictions on nationality.
    • How long does it take to finish a Ph.D.?
      Typically 4 years, but if you teach 20% every year, the funding is extended to up to 5 years.
    • Can you tell me my chances of getting admitted by looking at my C.V.?
      No. Admission is not decided by me.
    • How long should students spend on their research (in your group)?
      Long enough to do their work and graduate.
    • Would it be possible to work with you on project X?
      Only if I am currently working on X.
    • Can we talk? I am a student at UmU
      Yes. Follow the instructions above.
    • Can we talk? I am a student from country X.
      Yes. Follow the instructions above.

    M.Sc. Students

    I have research projects for M.Sc. thesis students. If you are a self-motivated and hardworking student at UmU and would like to work on cool research projects in the above areas, send me an email to find a time to talk. NOTE: Students are expected to spend no less than 20 hours per week on their research as part of this position.

    Undergraduate Students

    Computer Science undergraduate students (juniors or seniors only) interested in working in our group should first consider taking fundamental computer science courses (e.g., computer networks, operating systems). Students with a solid computer science background and skills (i.e., programming, preferably in Python, R, or Java; algorithms; analysis) and curiosity should contact me to set up a time to discuss potential projects.


    I can host interns only in exceptional cases, and only those who can complement our research areas of interest.


    I occasionally have the space and time to host fully funded research visitors. Area-wise, I am interested in hosting visitors who can complement our areas of interest, including machine learning, data-driven security, online privacy, distributed learning, robust learning, anomaly detection, and optimization from a machine learning perspective. A major theme on which we seek collaboration is big data security together with machine learning, AI, trustworthy learning, and privacy-aware learning, addressing representation, processing and analysis, decision-making, and optimization, mainly for security and privacy applications. If you are interested, email me to discuss such opportunities.