Events and training
La Trobe researchers and research students have access to a wide range of specialised courses, from beginner through to advanced levels, in High-Performance Computing, Excel for research, data management and visualisation, cleaning and exploring data, and more. Most of these courses are free for La Trobe researchers and research students. Many of the workshops are presented by our partner, Intersect Australia Limited.
Research Education and Development (RED), in collaboration with the Library, has an extensive workshop and seminar program listed on its website.
For Digital Research support, visit the Digital Research Drop-in session on the first Tuesday of each month.
Introducing La Trobe's Online Research Notebook (LabArchives) (M)
Getting stuck into a new project? Discover the advantages of the Online Research Notebook for recording, organising and storing your research. The Notebook enables the digitisation of research from its inception, whether capturing ideas for a literary tome, designing an experiment, or acquiring and linking research data. Sign up for a webinar.

Digital Research Drop-in (M) - Tuesday, 3 March, 3 - 4 pm
Held on the first Tuesday of each month. Do you have questions about data management, the Digital Research training programs, programming, or the use of computing in research? Experts from ICT and the Library are on hand to answer them, or at least point you in the right direction. The topic of focus this month is research data storage. Drop in at the Research Hub, Library, Level 2, Room 2.21.

Programming with R (M) - Friday, 6 March, 9:30 am - 4:30 pm
R is quickly gaining popularity as a programming language of choice for statisticians, data scientists and researchers. It has an excellent ecosystem, including the powerful RStudio development environment and the Shiny web application framework. Expression of interest.

Programming with MATLAB (M) - Friday, 20 March, 9:30 am - 4:30 pm
MATLAB is an incredibly powerful programming environment with a rich set of analysis toolkits optimized for solving engineering and scientific problems. Built-in graphics make it easy to visualize and gain insights from data, and a vast library of pre-built toolboxes lets you get started right away with algorithms essential to your domain. Expression of interest.

Collecting Web Data (M) - Friday, 10 April, 9:30 am - 4:30 pm
Web scraping is a technique for extracting information from websites. This can be done manually, but it is usually faster, more efficient and less error-prone when it can be automated. Expression of interest.
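To give a flavour of what automating web data collection looks like, here is a minimal Python sketch using only the standard library. The HTML string and the `TitleScraper` class are invented for illustration; in a real project you would fetch live pages (for example with `urllib.request`) and likely use a dedicated library:

```python
from html.parser import HTMLParser

# Stand-in for a page you might download; hypothetical markup.
SAMPLE_PAGE = """
<html><body>
  <h2 class="title">Programming with R</h2>
  <h2 class="title">Collecting Web Data</h2>
</body></html>
"""

class TitleScraper(HTMLParser):
    """Collects the text of every <h2 class="title"> element."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

scraper = TitleScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.titles)  # ['Programming with R', 'Collecting Web Data']
```

The same loop scales from two headings to thousands of pages, which is exactly the speed and repeatability advantage the workshop description refers to.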
More workshops - watch this space!
Workshop titles and descriptions
Advanced HPC: Parallel Programming - This intensive full-day course introduces different parallel programming methods: OpenMP as a widespread method for a shared-memory programming model, and MPI as the standard for a distributed-memory programming model. It is targeted at C and Fortran programmers.
Basic Statistics with R - Learn a simple yet powerful way to design and carry out analyses in R. This 1.5-day workshop introduces statistical concepts in a non-technical way and emphasises their practical application in R. The workshop will provide plenty of opportunities to gain hands-on experience and to access in-class support.
Cleaning and Exploring your Data with OpenRefine - Do you have messy data from multiple inconsistent sources, or open responses to questionnaires? Do you want to improve the quality of your research data by refining it and using the power of the internet? OpenRefine is the perfect partner to Excel. It is a powerful, free tool for exploring, normalising and cleaning datasets. In this course you'll work through the various features of OpenRefine by working on a fictional but plausible humanities research project.
Data visualisation with Google Fusion Tables - This course is ideal for researchers who work with large data sets and want to convey their research outcomes clearly and persuasively in a visual manner. By creating a heat map that merges geospatial data and crime statistics, participants will gain visualisation skills they can apply to their research.
Excel Fu for Researchers - Do you have large amounts of data that are messy, incomplete and contain errors? During this workshop you will learn how to use Excel to import, sort, filter, copy, protect, transform, summarise, merge and visualise research data.
G*Power Workshop: Sample size analysis for researchers - G*Power is a free statistical software package for power and sample size analysis. It offers point-and-click functionality and covers a wide variety of statistical tests. Presented by the Statistics Consultancy Platform, this full-day G*Power workshop introduces concepts of statistical power and relevant statistical tests in a non-technical way. Designed to be hands-on, the workshop focuses on the practical application of statistical methods.
Introduction to Unix - Do you plan to use high-performance computing for bioinformatics? Knowledge of the Unix operating system is fundamental to being productive on HPC systems. Command-line confidence unlocks powerful computing resources beyond the desktop. It enables repetitive tasks to be automated, and it comes with a swag of handy tools that can be combined in powerful ways. This workshop will introduce you to fundamental Unix concepts and teach you to run programs and write scripts through a series of hands-on exercises.
Introduction to Programming using MATLAB - MATLAB is an incredibly powerful programming environment with a rich set of analysis toolkits optimized for solving engineering and scientific problems. Built-in graphics make it easy to visualize and gain insights from data, and a vast library of pre-built toolboxes lets you get started right away with algorithms essential to your domain.
Introduction to High Performance Computing (HPC) - HPC allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This 1-day course will introduce you to the Unix environment and show you how to transfer your data onto, and run software on, HPC infrastructure.
Introductory Programming Workshop: Python, Unix and Git - Many research fields can benefit from automation and programmatic techniques, ranging from the humanities and social sciences through to biomedical sciences and engineering. The tools and techniques taught in this workshop will be of use to anyone who currently uses a computer for their research.
Introduction to Unix for HPC - High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This 2-day course will introduce you to the Unix environment and show you how to transfer your data onto, and run software on, HPC infrastructure.
Managing Data Capture and Surveys in REDCap - Would you like to enable secure and reliable data collection forms and manage online surveys? Would your study benefit from web-based data entry? Research Electronic Data Capture (REDCap) might be for you.
Nectar Research Cloud - Find out what cloud computing is, how it works, how it can benefit your research, and what types of service Nectar offers. This course will provide hands-on instruction on how to launch an instance on the Cloud, connect to it, configure it, and set up storage so that it can be accessed from the instance and remotely from your office computer.
Office 365: OneDrive, Delve and Sway - This session will help you understand the capabilities of Office 365, how to access the apps, and how to apply them to your research or work.
Powerful text searching and matching with Regexes - Regular Expressions (regexes) are a powerful way to handle a multitude of different types of data. They can be used to find patterns in text and make sophisticated replacements. Think of them as find and replace on steroids. Come along to this workshop to learn what they can do and how to apply them to your research.
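As a taste of the two operations described above, finding patterns and making sophisticated replacements, here is a minimal Python sketch; the sample text and date patterns are invented for illustration:

```python
import re

notes = "Samples collected on 2020-03-06 and 2020-04-10 at site LT-7."

# Finding patterns: pull out every ISO-style date in the text.
dates = re.findall(r"\d{4}-\d{2}-\d{2}", notes)
print(dates)  # ['2020-03-06', '2020-04-10']

# "Find and replace on steroids": capture groups let the replacement
# reorder each date into DD/MM/YYYY rather than just substituting text.
reordered = re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", notes)
print(reordered)
# Samples collected on 06/03/2020 and 10/04/2020 at site LT-7.
```

The same patterns work in most languages and editors, since regex syntax is broadly shared across tools.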
Regular Expressions on Command - Would you like to use regular expressions with the classic command line utilities find, grep, sed and awk? These venerable Unix utilities allow you to search, filter and transform large amounts of text (including many common data formats) efficiently and repeatably.
Software Carpentry: Introduction to Unix Shell - Do you want to unlock powerful computing resources beyond the desktop, including virtual machines and High-Performance Computing? Unix can enable repetitive tasks to be automated, and it comes with a swag of handy tools that can be combined in powerful ways to help with your research.
Software Carpentry: Introduction to MATLAB - MATLAB is an incredibly powerful programming environment with a rich set of analysis toolkits optimized for solving engineering and scientific problems. Built-in graphics make it easy to visualize and gain insights from data, and a vast library of pre-built toolboxes lets you get started right away with algorithms essential to your domain.
Software Carpentry: Introduction to programming with Python - This one-day workshop is aimed at researchers and research students who would like to start learning to code in the Python programming language, a popular language for scientific computing.
Software Carpentry: Introduction to R - R is quickly gaining popularity as a programming language of choice for statisticians, data scientists and researchers. It has an excellent ecosystem, including the powerful RStudio development environment and the Shiny web application framework. However, getting started with R can be challenging, particularly if you've never programmed before. That's where this introductory course comes in.
Software Carpentry: Introduction to version control with Git - Have you mistakenly overwritten programs or data and want to learn techniques to avoid repeating the loss? Version control systems are one of the most powerful tools available for avoiding data loss and enabling reproducible research.
Using Databases and SQL - Do you need a better way to store your structured research data? Structured Query Language (SQL) is the standard means for reading from and writing to databases. Databases use multiple tables, linked by well-defined relationships, to store large amounts of data without needless repetition while maintaining the integrity of your data. Moving from spreadsheets and text documents to a structured relational database will reward you many times over in speed, efficiency and power.
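The linked-tables idea described above can be sketched in a few lines using Python's standard-library sqlite3 module; the table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two tables linked by a well-defined relationship: each measurement
# references one site, so site details are stored once, never repeated.
cur.execute("CREATE TABLE site (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE measurement (
                   id INTEGER PRIMARY KEY,
                   site_id INTEGER REFERENCES site(id),
                   value REAL)""")
cur.execute("INSERT INTO site VALUES (1, 'Bundoora')")
cur.executemany("INSERT INTO measurement VALUES (?, ?, ?)",
                [(1, 1, 4.2), (2, 1, 3.9)])

# SQL joins the related tables back together on demand.
cur.execute("""SELECT site.name, AVG(measurement.value)
               FROM measurement
               JOIN site ON measurement.site_id = site.id
               GROUP BY site.name""")
print(cur.fetchall())  # one row per site, with its average measurement
```

Because the relationship is declared once in the schema, queries like this stay correct no matter how many measurements accumulate, which is where the speed and integrity gains over a spreadsheet come from.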
Basic statistics with STATA - This workshop introduces statistical concepts in a non-technical way and emphasises their practical application in STATA. The workshop will provide plenty of opportunities to gain hands-on experience and to access in-class support.