Laura Uzcategui

Hi 👋, 

I'm Laura, a Software Engineer with a love for Machine Learning and Maths.

As a book lover I could chat for hours about books and more books 📚💕
Positions

Student @ AI Program

  • Stanford University
  • Nov 2021 - Present

Software Engineer

  • Microsoft
  • Jul 2020 - Oct 2021

Software Engineer ML Platform

  • Workday
  • May 2016 - Feb 2020

Master's Degree in Project Management

  • La Salle Campus Barcelona
  • Sep 2012 - Jul 2013

BSc Computer Science

  • Universidad Central de Venezuela
  • Sep 2003 - Dec 2008

Collections

Speaking Events

1 Highlight

Books

1 Highlight

What Laura's working on


2021

Nov 18, 2021
Studying Machine Learning
Studying Graph theory
I've been crazy busy with my new course 😁 

Currently I'm learning how you can define a search problem and apply any search algorithm to it, and it's like magic happens: it will return the best path to the solution you need.

A search problem definition:

- A set of states, which model the current state of the world and all its possible combinations

- A set of actions, that you might perform at any state to help you reach the goal

- A successor function, which, given a state and an action, transitions you from one state to the next; each transition has a cost associated with it

- An initial state: where you are starting in your problem. The initial representation of where you are.

- An end state, which helps you validate whether you have reached the goal. It's also called the goal test.
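The five components above can be sketched in Python. This is a minimal illustration, not course code; the toy road map, states, and costs are made up:

```python
# A minimal search-problem definition mirroring the five components:
# states, actions, successor function (with costs), initial state, goal test.

class SearchProblem:
    def __init__(self, initial, goal, graph):
        self.initial = initial   # initial state
        self.goal = goal         # end state (goal-test target)
        self.graph = graph       # adjacency: state -> [(next_state, cost)]

    def actions(self, state):
        """Actions available at `state` (here: which neighbor to move to)."""
        return [nxt for nxt, _ in self.graph.get(state, [])]

    def successor(self, state, action):
        """Successor function: the next state and the cost of the transition."""
        for nxt, cost in self.graph[state]:
            if nxt == action:
                return nxt, cost
        raise ValueError(f"{action} is not reachable from {state}")

    def is_goal(self, state):
        """Goal test: have we reached the end state?"""
        return state == self.goal


# Toy road map: edges with travel costs (invented for illustration).
roads = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
problem = SearchProblem(initial="A", goal="D", graph=roads)
```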

This topic is really interesting as it will help you define any problem you might have and work through a solution using search algorithms. 

Among the search algorithms we have seen in class so far:

  • DFS, or Depth-First Search.
  • BFS, or Breadth-First Search.
  • DFS with iterative deepening, a combination of DFS and BFS.
  • UCS, or Uniform Cost Search.
  • A*, similar to UCS but it uses heuristic functions to move in the direction of the goal.
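As a sketch of one of these, here is uniform cost search with a priority queue over the kind of weighted graph a search problem defines (the graph is a made-up example, not from class):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (total_cost, path) for the cheapest path, or None if unreachable.
    `graph` maps a state to a list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]   # (cost so far, state, path)
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path          # first pop of goal is the cheapest
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in graph.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None

# Toy example: the direct A->C edge costs 4, but A->B->C costs only 3,
# so UCS prefers the longer-but-cheaper route.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
```

Because the frontier is ordered by accumulated cost, the first time the goal is popped the path is guaranteed cheapest (with non-negative edge costs).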

Nov 03, 2021
Started at university
Studying Machine Learning
I've happily started my journey into the AI Program at Stanford.

Highlights:

- Lectures are dense and heavy in content, but really good. So far they keep you interested in continuing to watch and research on your own.
- The content doesn't start straight away with ML, but rather with analytical thinking about how to solve problems computationally and logically.

I started by taking CS221 - AI: Principles and Techniques.


Nov 01, 2021
Started a new role at Stanford University
Excited to join Stanford University as Student @ AI Program! 🎉
Student @ AI Program, Stanford University
Oct 22, 2021
Attended workshop
Using machine learning
Learning Transformers
Attended Hugging Face Transformers Workshop.

Highlights
  • It started by describing what attention mechanisms are and what the architecture looks like.
  • It showcased the concepts behind Transformers, and why and where they are used. It also described which architectures focus on the encoder or the decoder side, such as BERT, T5, and GPT.
  • Pre-training on a big corpus is usually done by big companies; what you can do is fine-tune on your own datasets.
  • The main challenges:
    • Language barriers: models are often trained only on English or European languages.
    • Data hungry: pre-training requires huge amounts of data and usually costs a lot of money.
    • Black boxes: like other neural nets, we don't know exactly what is happening in terms of causality.
    • Biases: as the models are trained on the internet, they come with bias; they can lean towards stereotypes, racism, etc.
  • The workshop was based on demonstrating and explaining the Hugging Face ecosystem, where you can basically use building blocks such as:
    • Pipelines: a high-level abstraction that, given an input, returns predictions; it comes with a default model, but you can set it to your own.
    • Datasets:
  • Pipelines can perform tasks such as:
    • Entity recognition: detects entities in a text; useful for entity extraction and automation.
    • Question answering: given a text and a question, the model provides an answer extracted from the input.
    • Summarization: given a text, it can generate a summary of it.
    • Zero-shot classification: given a text and a set of candidate classes, the model infers the probability that the text is compatible with each class.
It was also shown how you can work with:
  • The Datasets library: you can download a dataset from the Hub and work with it in a matter of seconds using a pipeline.
  • Spaces: a good way to write an application and host it in your own space, where you can basically write a blog or do a demo.

Link to the workshop: https://github.com/huggingface/workshops/tree/main/machine-learning-tokyo

Model we created based on Amazon reviews corpus: https://huggingface.co/laurauzcategui/xlm-roberta-base-finetuned-marc-en

Oct 19, 2021
Gave a talk
Machine learning
Speaking about TensorFlow
Today, I gave a talk at Women In ML Symposium about my journey in ML. 

It was fun. 

A few highlights:
  1. Get started by sharpening your skills at handling data.
  2. If you are really curious about how ML works under the hood, take Linear Algebra 18.06 @ MIT and Mathematics for Machine Learning by Imperial College London.
  3. If you want to dive into practice, check out the TensorFlow tutorials on their website or do a deep dive into Kaggle.
  4. Community support & mentoring are important: the community will always have your back, and you can share what you learn. It serves as a good checkpoint.
  5. Make a plan, but also accommodate uncertainty (COVID, family, etc.).
Oct 15, 2021
Studying Machine Learning
Working on ML-Zoomcamp Week 06 - Decision Trees

  • Decision Trees (DTs) are a supervised method used for both regression and classification.
  • DTs make decisions based on feature splits. For example: the number of rooms in a house (0, 1, 2, >3); based on this, the tree will have decision nodes.
  • Each leaf represents a decision. For example: the house is expensive vs. the house is not expensive.
  • You can tune the parameters of the tree, such as its depth; ideally, you don't want to grow the tree to full depth, as you will end up overfitting.
  • Last but not least, you can use ensemble methods, where you have a number of estimators and the decision is made by averaging the decisions of all estimators.
  • Yet another interesting technique is XGBoost, an ensemble algorithm recognized for its performance. It works by boosting: trees are trained sequentially, each one correcting the errors of the previous ones.
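The rooms-to-price example above can be sketched with scikit-learn. This is a toy illustration, not the Zoomcamp homework; the dataset is invented, and it assumes scikit-learn is installed:

```python
# A tiny decision tree and an averaging ensemble (random forest) on the
# "number of rooms -> expensive?" example. The data below is made up.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Feature: number of rooms; label: 1 = expensive, 0 = not expensive.
X = [[0], [1], [1], [2], [3], [4], [4], [5]]
y = [0,   0,   0,   0,   1,   1,   1,   1]

# Capping max_depth keeps the tree from memorizing (overfitting) the data.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# An ensemble trains many trees on bootstrap samples and averages their votes.
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# A 1-room house vs. a 4-room house: the tree splits on the room count,
# predicting "not expensive" (0) and "expensive" (1) respectively.
print(tree.predict([[1], [4]]))
```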
