Events

The Department presents a regular series of colloquia and seminars, engaging with universities and industry bodies around the world.

Melbourne - Statistics

Gossip, Small Worlds, and the spread of disease

Professor Andrew Barbour (Universität Zürich, University of Melbourne and Monash University)

In the modern world, social and casual contact can be described as a mix of regular local contact, supplemented by occasional long-range contact. The SARS epidemic, for instance, presented a number of foci that were widely separated in space, presumably for precisely this reason. There have been a number of different models advanced that reflect this idea: the Watts-Strogatz 'small worlds' model of social interaction, the Ball, Mollison and Scalia-Tomba 'great circle' model of infection, and the Aldous model for the spread of gossip. In this talk, we show that these models are all closely linked mathematically, with branching processes being the common feature, and that, perhaps surprisingly, branching processes can be used to deliver a broad description of the whole course of their spread, and not just of the early stages of development.

General Semi-Markovian Modelling of Limit Order Books

Professor Anatoliy Swishchuk (University of Calgary)

In this talk, we consider several extensions of the Markovian model of R. Cont and A. de Larrard (SIAM J. Financial Math., 2013) to semi-Markov models suggested by empirical observations. The first extension allows 1) arbitrary (possibly non-exponential) distributions for the inter-arrival times of book events, and 2) both the nature of a new book event and its corresponding inter-arrival time to depend on the nature of the previous book event (rather than being independent). The dynamics of the bid and ask queues are modelled by a Markov renewal process, and the mid-price by a semi-Markov process. We justify and illustrate the approach by calibrating our model to five stocks (Amazon, Apple, Google, Intel, Microsoft) on June 21st, 2012 (LOBSTER data), to 15 stocks from Deutsche Boerse Group (September 23rd, 2013), and to the Cisco asset (November 3rd, 2014). The second extension concerns the case where price changes are not fixed at one tick, and the third the case of an arbitrary number of states for the price changes. For both cases the justification, diffusion limits, implementations, and statistical and numerical results are presented for different LOB data: LOBSTER data, and Cisco, Facebook, Intel, Liberty Global, Liberty Interactive, Microsoft and Vodafone from 2014/11/03 to 2014/11/07. The talk is based on two research papers written with my former students Nelson Vadori (JP Morgan, NY), Julia Schmidt and Katharina Cera (TUM, Munich, Germany).

Assessing Heterosis using Genetic Distance Formulas

Professor Bob Anderssen (CSIRO Data61)

Various genetic distance (GDist) formulas have been proposed for formalizing and quantifying the concept of genetic difference (GDiff), so that heterosis phenotypes, for different choices of parents that are crossed, can be compared quantitatively. Biologically, GDist formulas, such as those of Jaccard, Nei, and Nei and Li, are defined in terms of the proportional presences, in the paternal and maternal parents, of trigger point regions (TPRs) assumed to be driving the associated genetics, such as transposable elements in promoters. However, such formulas do not allow the relative biological importance of the proportional presences to be taken into account. Here, a parametric formula is proposed, analysed and validated, where the value of the parameter can be chosen to take account of the perceived relative biological performance of the individual proportional presences. The validation is based on the fact that, for particular choices of the parameter, some of the traditional formulas, such as those of Jaccard and Nei, are recovered, which in turn highlights how such formulas give different weightings to the proportional presences. Mathematical and stochastic aspects associated with the study and modelling of heterosis are discussed. This is joint work with Ming-Bo Wang, CSIRO Agriculture and Food, and Mark Westcott, CSIRO Data61.

The Mathematics of MCMC

Professor Jeffrey Rosenthal (University of Toronto)

Thursday 15 December

Abstract: Markov chain Monte Carlo (MCMC) algorithms, such as the Metropolis Algorithm and the Gibbs Sampler, are extremely useful and popular for approximately sampling from complicated probability distributions through repeated randomness. This talk will use simple graphical simulations to explain how these algorithms work, and why they are so useful.
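
As a concrete illustration of the "repeated randomness" behind these algorithms, here is a minimal random-walk Metropolis sampler in Python, targeting a simple density known only up to a normalising constant; the target, step size and chain length are arbitrary illustrative choices, not material from the talk.

```python
import numpy as np

def metropolis(log_target, x0, n_steps, step_sd, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step_sd^2) and accept it
    with probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    x, lp = x0, log_target(x0)
    for i in range(n_steps):
        x_prop = x + step_sd * rng.standard_normal()
        lp_prop = log_target(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # acceptance step
            x, lp = x_prop, lp_prop
        chain[i] = x
    return chain

# Example target: an unnormalised equal mixture of N(-2, 1) and N(2, 1).
log_target = lambda x: np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)
draws = metropolis(log_target, x0=0.0, n_steps=50_000, step_sd=1.5)
print(draws[10_000:].mean(), draws[10_000:].std())   # roughly 0 and sqrt(5)
```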

Asymptotic properties of the partition function and applications in tail index inference of heavy-tailed data

Professor M. Leonenko (Cardiff University, UK)

Friday 16 September 2016

Abstract: Heavy-tailed distributions are of considerable importance in modelling a wide range of phenomena in finance, geology, hydrology, physics, queuing theory and telecommunications. We develop a new method for estimating the unknown tail index for independent and dependent data. Our estimator is based on a variant of a statistic sometimes called the empirical structure function or partition function. Joint work with D. Grahovac (Osijek University, Croatia) and M. Taqqu (Boston University, USA).

The shapes of things to come

Professor Robert Staudte (La Trobe University)

The study of shapes of probability distributions, whether discrete or continuous, is simplified by viewing them through the revealing composition of the density with the quantile function. Following normalisation, the resulting probability density quantiles (pdQs) carry essential information regarding shapes, which enables simple classification by asymmetry and tail weights.

The pdQs are not only location-scale free, but are themselves densities of absolutely continuous distributions, all having the same support [0,1]. This allows for comparisons between them using metrics such as the Hellinger or Kullback-Leibler divergences. Empirical estimates for both discrete and continuous cases will be described. Asymmetry is measured in terms of distance from the symmetric class, and tail-weight is defined in terms of boundary derivatives. Finally, divergence from, and convergence to, uniformity will be illustrated.
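
A small numerical sketch of the pdQ construction, assuming the usual definition f*(u) = f(Q(u)) / ∫₀¹ f(Q(v)) dv; the distributions used below are arbitrary examples.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def pdq(dist, n_grid=2001):
    """Probability density quantile: compose the density f with the quantile
    function Q, then renormalise so the result integrates to 1 on (0, 1)."""
    u = np.linspace(0.0005, 0.9995, n_grid)   # open grid: avoid the endpoints
    fq = dist.pdf(dist.ppf(u))                # f(Q(u))
    return u, fq / trapezoid(fq, u)

u, f1 = pdq(stats.norm())
_, f2 = pdq(stats.norm(loc=7, scale=3))       # same shape, different location/scale
_, f3 = pdq(stats.expon())

print("pdQ is location-scale free:", np.allclose(f1, f2))
print("normal vs exponential pdQs differ by up to", float(np.max(np.abs(f1 - f3))))
print("each integrates to ~1:", trapezoid(f1, u), trapezoid(f3, u))
```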

Cancer services research

Professor Terry Mills, Loddon Mallee Integrated Cancer Service

Time & Date: 12.00 pm Friday 9 October 2015

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

Abstract:

Cancer continues to be a major contributor to the burden of disease in Australia as well as many other developed countries. Governments around the world invest heavily in planning how to deal with this disease.

Loddon Mallee Integrated Cancer Service is part of a network of similar organisations across Victoria, and is supported by the government of Victoria. Each integrated cancer service organisation is based in a public hospital, and Loddon Mallee Integrated Cancer Service is based in Bendigo Health. These integrated cancer service organisations play a key role in developing cancer services in the state. Day-to-day work generates problems that can be classified as problems in cancer services research.

Earlier this year, Professor Terry Mills began writing down problems that came across his desk and required some mathematical modelling. In this seminar, he will describe these problems and provide you with a general understanding of cancer services research based on his experience at Loddon Mallee Integrated Cancer Service.

Forecasting with temporal hierarchies

Dr George Athanasopoulos, Associate Professor of Econometrics & Business Statistics (Monash University)

Time & Date: 12.00 pm Friday 11 September 2015

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

Abstract:

This paper introduces the concept of temporal hierarchies for time series forecasting. A temporal hierarchy can be constructed for any time series by means of non-overlapping temporal aggregation. Predictions constructed at all aggregation levels are combined with the proposed framework to result in temporally reconciled, accurate and robust forecasts.

The implied combination mitigates modelling uncertainty, while the reconciled nature of the forecasts results in a unified prediction that supports aligned decisions at different planning horizons: from short-term operational planning to long-term strategic planning.

The proposed methodology is independent of forecasting models. It can embed high level managerial forecasts that incorporate complex and unstructured information with lower level statistical forecasts. Our results show that forecasting with temporal hierarchies increases accuracy over conventional forecasting, particularly under increased modelling uncertainty.

We'll discuss the organisational implications of the temporally reconciled forecasts using a case study of Accident & Emergency departments.
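
A toy sketch of the mechanism described above for a single year of quarterly data, using the simplest least-squares (identity-weighted) reconciliation of base forecasts stacked over annual, semi-annual and quarterly levels; this illustrates the general idea of temporal reconciliation, not the particular weighting scheme proposed in the paper, and the forecast numbers are made up.

```python
import numpy as np

# Temporal aggregation ("summing") matrix for one year of quarterly data:
# rows give the annual total, the two semi-annual totals, then the quarters.
S = np.array([
    [1, 1, 1, 1],      # annual
    [1, 1, 0, 0],      # first half-year
    [0, 0, 1, 1],      # second half-year
    [1, 0, 0, 0],      # Q1
    [0, 1, 0, 0],      # Q2
    [0, 0, 1, 0],      # Q3
    [0, 0, 0, 1],      # Q4
])

# Base forecasts produced independently at each level (made-up numbers);
# note they are incoherent: 410 != 208 + 188, etc.
y_hat = np.array([410.0, 208.0, 188.0, 101.0, 97.0, 96.0, 92.0])

# Least-squares reconciliation: project the base forecasts onto the space of
# forecasts that are consistent with the temporal aggregation structure.
G = np.linalg.solve(S.T @ S, S.T)          # (S'S)^{-1} S'
y_tilde = S @ (G @ y_hat)                  # reconciled forecasts

print(y_tilde.round(2))
print("annual equals sum of quarters:", np.isclose(y_tilde[0], y_tilde[3:].sum()))
```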

Fractional extensions of the Poisson process

Professor Enzo Orsingher (Sapienza University of Rome)

Time & Date: 1.00 pm Tuesday 25 August 2015

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

Abstract:

In this talk we consider several extensions of the Poisson process starting from the:

time-fractional Poisson process

We consider the equations governing the state probabilities and replace the time derivative with the so-called Djerbayshan-Caputo fractional derivative. We prove that the time-fractional Poisson process is a renewal process with Mittag-Leffler distributed intertimes;
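
For orientation, a sketch of the standard way these governing equations and the Mittag-Leffler inter-arrival law are usually written (textbook notation, with fractional order ν ∈ (0, 1]; quoted for reference, not reproduced from the talk):

```latex
\frac{\partial^{\nu} p_k(t)}{\partial t^{\nu}} = -\lambda\,\bigl(p_k(t)-p_{k-1}(t)\bigr),
\qquad k \ge 0,\quad p_{-1}\equiv 0,\quad p_k(0)=\delta_{k,0},
\qquad\text{with}\quad
\Pr(T>t) = E_{\nu}\!\left(-\lambda t^{\nu}\right),
\quad E_{\nu}(z)=\sum_{j=0}^{\infty}\frac{z^{j}}{\Gamma(\nu j+1)}.
```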

space-fractional Poisson process

We construct a Poisson process with independent increments and multiple jumps whose distribution satisfies the state-probabilities equation

∂p_k/∂t = −λ^α (I−B)^α p_k,

where B is the shift operator.

  • other generalisations of the Poisson process are presented (which are weighted sums of the homogeneous Poisson process)
  • the iterated Poisson process will be examined
  • starting from the equation

∂p_k/∂t = −f(λ(I−B)) p_k,

where f is a Bernstein function, we obtain generalized Poisson processes with multiple jumps and independent increments.

Statistical evaluation of low-template DNA profiles

Professor David Balding (University of Melbourne)

Time & Date: 12.00pm Friday 31 July 2015

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

Abstract:

The evaluation of weight-of-evidence for low-template and/or degraded DNA profiles has been a subject of controversy over many years, not least because of the possibility of small amounts of "environmental" DNA affecting the sample, often referred to as "contamination". I will review the statistical models, and issues surrounding the presentation in court of results from such models. I will draw on my experiences in the courts of several countries, predominantly the UK.

Robust negative binomial regression with application to falls data

A/Professor Stephane Heritier (Monash University)

Time & Date: 12.00pm Friday 22 May 2015

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

Abstract:

Negative binomial (NB) regression is commonly used in intervention studies aiming at reducing the number of falls in patients with degenerative disease. Difficulties arise with classical estimation methods (maximum likelihood (ML) and moment-based estimators) when unexpectedly large counts are observed, such as the so-called 'multiple fallers' among Parkinson's disease sufferers.

In this work we extend two approaches for building robust M-estimators, developed for generalised linear models, to NB regression. Robustness in the response is achieved by either 1) applying a bounded function to the Pearson residuals arising in the ML score equations; or 2) bounding the unscaled deviance components. An auxiliary weighted maximum likelihood estimator is introduced for the overdispersion parameter. We explore the impact of using different bounding score functions for both approaches and derive the asymptotic distributions of the new proposals. Simulations show that redescending estimators display smaller biases under contamination while remaining reasonably efficient at the assumed model. A unified notation shows that both approaches are actually very close provided that the bounding functions are chosen and tuned appropriately. An application to the PD-Fit study data, a recent intervention trial in Parkinson's disease patients, will be presented as an illustration.

This is joint work with:

William Aeberhard, School of Public Health, University of Sydney, Sydney
Eva Cantoni, Research Centre for Statistics, University of Geneva, Geneva
Reference

Aeberhard W, Cantoni E, Heritier S (2014). "Robust inference in the negative binomial regression with an application to falls data", Biometrics, 70(4):920-931.
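
As a rough, hedged illustration of the first robustification idea above (bounding the influence of large Pearson residuals), here is a Python sketch that downweights observations with large Pearson residuals in a negative binomial GLM and refits. It is a crude iterative reweighting for intuition only, not the M-estimators of Aeberhard, Cantoni and Heritier; the simulated data, the assumption of a known overdispersion parameter and the tuning constant c are all arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 1)))
mu = np.exp(0.5 + 0.8 * X[:, 1])
alpha = 0.7                                   # NB overdispersion (taken as known here)
y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))
y[:5] = 60                                    # a few 'multiple fallers' (outlying counts)

family = sm.families.NegativeBinomial(alpha=alpha)
fit = sm.GLM(y, X, family=family).fit()       # classical (non-robust) fit

# Huber-type weights based on Pearson residuals, then reweighted refits.
c = 2.0
for _ in range(5):
    r = fit.resid_pearson
    w = np.minimum(1.0, c / np.abs(r))        # downweight large residuals
    fit = sm.GLM(y, X, family=family, var_weights=w).fit()

print("reweighted coefficients:", fit.params.round(3))
```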

Making the best of data independent acquisition mass spectrometry: tools to cut a diamond from a lump of coal

Dr Hyungwon Choi (National University of Singapore)

Time & Date: 12:00pm Friday 24 April 2015.

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

Abstract:

Data Independent Acquisition (DIA) is a novel mode of mass spectrometry (MS) analysis that can generate MS/MS data for an unbiased selection of peptides, offering new opportunities to achieve more complete proteome coverage for downstream quantitative analysis. However, data extraction methods for DIA data are still at an early stage of development, and few statistical approaches are available for this emerging platform of quantitative proteomics data.

Here I will describe a complete computational pipeline to extract DIA data and perform robust statistical analysis on the resulting MS2-level quantitative data. This workflow consists of two software packages called DIA-Umpire and MAP-DIA. DIA-Umpire performs spectral library-independent and -dependent extraction of MS1- and MS2-level peak areas to generate the base material for quantitative analysis. These data are then further refined and analyzed by MAP-DIA, which performs essential data preprocessing, including a novel retention time-based normalization method and a sequence of peptide/fragment selection steps. MAP-DIA also offers hierarchical model-based statistical significance analysis for multi-group comparisons under representative experimental designs (e.g. time course).

Using a comprehensive set of simulation datasets, it will be shown that MAP-DIA provides reliable classification of differentially expressed proteins with accurate control of the false discovery rate. I will also illustrate MAP-DIA using two recently published SWATH-MS datasets of the 14-3-3 dynamic interaction network and the prostate cancer glycoproteome, with detailed illustration of the data preprocessing steps and statistical analysis.

Fractional factorial design for large computational experiments

Dr Tom Peachey (University of Queensland)

Time & Date: 12:00pm Friday 27 March 2015.

Venue: Room 310 (ACE Room), Physical Sciences 2, La Trobe University, Melbourne Campus.

The use of computer modelling has become pervasive throughout research. This talk describes the Nimrod toolset that expedites large experiments of this kind. In particular, a new algorithm for Fractional Factorial Design, incorporated in Nimrod, extends such experiments to up to 130 factors.

Tom Peachey combines an interest in classical analysis with another in building computational models, sometimes in the same paper.

On phase transitions for specifying the distributions of stochastic processes

Associate Professor Aihua Xia (University of Melbourne)

12:30pm, Friday 12 September, 2014
Room 310 (Access Grid Room), Physical Sciences 2, La Trobe University, Melbourne (Bundoora) Campus

In this talk, I'll look at the relationship between uncorrelation and independence, and deduce the conditions that make these two concepts equivalent. These conditions are then "projected" to obtain the common phase transition phenomenon for specifying the distributions of stochastic processes, including the well-known Poisson process.

A functional data approach to the analysis of gait patterns and kinematics indices

Dr Julia Polak (University of Melbourne)

Friday 15 August 2014

Recently, attention in the field of gait analysis has been devoted to defining so-called "kinematics normalcy indices". These indices have several aims. First, to define a metric that allows a comparison between an arbitrary gait pattern of the particular patient of interest and a nominally "normal" gait pattern from the general population. Second, to detect and measure changes in the gait pattern of an individual before and after an intervention or over time. Such metrics are important in clinical cohort studies where investigators need to evaluate the effect of a treatment on the gait pattern. The majority of the indices commonly used in practice are constructed point by point, which ignores the functional nature of the data generated by gait analysis.

In this study, a typical dataset was collected by motion sensors connected to the hips, knees and ankles of the examined individual. At each of these three joints, the sensor records the joint movement in three dimensions over time, normalized by gait cycle. The nature of the dataset suggests a functional approach to the analysis.

This talk presents the newly proposed "functional" index, based on the notion of data depth, and compares its performance to other commonly used indices. All illustrations presented use real data collected by the researchers at the gait analysis laboratory. We show that the index created from functional analysis tools takes into account the shape of the waveform of the gait pattern and, therefore, provides a more meaningful tool than the existing approaches.

The incidence of cancer

Professor Terry Mills  (Loddon Mallee Integrated Cancer Service)

Friday 4 July 2014

There is widespread interest in measuring the risk of being diagnosed with cancer. Internationally, especially in developed countries, governments collect and use data on the incidence of cancer for strategic planning to ensure that the nation has the resources that will be required to deal with the disease. Incidence data can also be used to assess the effectiveness of public health campaigns. However, there are several measures for quantifying the incidence of cancer. This paper examines the cumulative incidence rate. We present a review of the method for estimating the cumulative incidence rate of cancer, and for comparing these rates in two populations. We discuss the connection between the cumulative incidence rate and the cumulative risk of being diagnosed with cancer by a certain age, and provide details of the underlying mathematical ideas. This work has been conducted in collaboration with Christopher Lenard (Mathematics and Statistics, La Trobe) and Ruth F.G. Williams (Economics, La Trobe).
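
To make the connection between the cumulative incidence rate and the cumulative risk concrete, here is a small sketch using the standard formulas: the cumulative rate to age 75 is the sum of age-specific rates multiplied by the width of each age band, and the cumulative risk is 1 − exp(−cumulative rate). The rates below are made-up numbers, not data from the talk.

```python
import numpy as np

# Hypothetical age-specific incidence rates (cases per 100,000 person-years)
# for the 5-year age bands 0-4, 5-9, ..., 70-74.
rates_per_100k = np.array([2, 1, 1, 2, 4, 8, 15, 30, 60, 110, 190, 300, 450, 620, 800])
band_width = 5  # years in each age band

cumulative_rate = band_width * np.sum(rates_per_100k / 100_000)
cumulative_risk = 1.0 - np.exp(-cumulative_rate)   # risk of diagnosis by age 75

print(f"cumulative rate to age 75: {cumulative_rate:.4f}")
print(f"cumulative risk (prob. of diagnosis by 75): {cumulative_risk:.3%}")
print(f"i.e. roughly 1 in {1 / cumulative_risk:.0f}")
```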

Diffusion in heterogeneous domains

Professor Vo Anh (Queensland University of Technology)

Friday 23 May 2014

This work is motivated by the problem of saltwater intrusion in coastal aquifers. These underground aquifers can be a major source of water for irrigation, industrial and town water usage. Due to their proximity to the ocean, excessive demand for groundwater may result in saltwater intrusion with a substantial loss of agricultural land. It is therefore an essential task to develop suitable mathematical models and computational tools for visualisation and prediction of salinity diffusion in aquifers for scenario analysis and management planning.
Due to the complex nature of the aquifers, with alternating layers of permeable sandstone and impermeable clay sediments, it is a challenge to understand the role of heterogeneity in the aquifer and in the physical law governing the saltwater flow. We will formulate the problem in the framework of fractional diffusion in heterogeneous domains. We will outline the existing theory of diffusion on fractals, then move on to a theory of diffusion on multifractals based on the RKHS approach. This latter theory yields a class of models for fractional diffusion with variable singularity order in heterogeneous domains. Wavelet-based expansions of these random fields are employed to obtain finite-dimensional approximations of their linear estimation. A simulation study is presented to illustrate the results.

Derivation and predictive accuracy of regression-based pedotransfer functions

Chief Biometrician Dr Subhash Chandra (Agriculture Research Division, Department of Environment and Primary Industries)

Friday 11 April 2014

Direct measurement of many soil properties is time consuming and expensive. Pedotransfer functions (PTFs) help overcome these limitations through the derivation, usually regression-based, of an empirical relationship between any such costly-to-measure soil property and other potentially correlated but more easily and economically measurable soil properties. The data for derivation of these PTFs emanate from soil samples that are usually collected using a nested spatial sampling design. A review of the soil science literature suggests that the derivation of PTFs has generally not accounted for this nested spatial sampling structure in the data. Sound scientific modelling of data, the derivation of PTFs being no exception, needs to recognize and appropriately account for this structure in order to generate scientific inferences that the data actually support. This talk, taking field capacity (FC) as an example PTF, aims to demonstrate this simple principle, using mixed effects regression models to derive PTFs that the data actually support and assessing their predictive accuracy using the Akaike information criterion.
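
Below is a minimal sketch, in Python rather than the software used in the talk, of the kind of mixed-effects PTF regression being advocated: fixed effects for cheap-to-measure covariates plus a random intercept for the sampling unit. The simulated data, the column names (fc, clay, sand, site) and the single level of nesting are hypothetical stand-ins for whatever the actual sampling design provides.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for nested soil data: samples grouped within sites.
rng = np.random.default_rng(1)
n_sites, n_per_site = 20, 8
site = np.repeat(np.arange(n_sites), n_per_site)
clay = rng.uniform(5, 50, site.size)
sand = rng.uniform(10, 70, site.size)
site_effect = rng.normal(0, 2.0, n_sites)[site]      # site-level (nested) variation
fc = 10 + 0.45 * clay - 0.10 * sand + site_effect + rng.normal(0, 1.5, site.size)
df = pd.DataFrame({"fc": fc, "clay": clay, "sand": sand, "site": site})

# Mixed-effects PTF: fixed effects for the cheap-to-measure covariates and a
# random intercept for site, so the nested sampling structure is respected.
mixed = smf.mixedlm("fc ~ clay + sand", data=df, groups=df["site"]).fit(reml=False)
ols = smf.ols("fc ~ clay + sand", data=df).fit()     # ignores the nesting

print(mixed.params.round(3))
# The talk compares candidate PTFs via AIC; here we simply note that the
# mixed model attains a higher (ML) log-likelihood than pooled OLS.
print("log-likelihood, mixed:", round(mixed.llf, 1), " OLS:", round(ols.llf, 1))
```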

Which Random Effects Model?

Professor Elena Kulinskaya and Ilyas Bakbergenuli (University of East Anglia)

The random effects model (REM) in meta-analysis incorporates heterogeneity of effect measures across studies. We are interested in combining odds ratios from K 2x2 contingency tables. The standard (additive) REM is the random intercept model in 1-way ANOVA for log-odds ratios. Alternatively, heterogeneity can be induced via intra-cluster correlation, say by assuming beta-binomial distributions. This (multiplicative) model is convenient for defining a REM in conjunction with the Mantel-Haenszel approach. Our method of estimating the intra-class correlation (assumed constant across studies) is based on profiling the modified Breslow-Day test. Coverage of the resulting confidence intervals is compared to standard methods through simulation.

Unexpectedly, we found that the standard methods are very biased in the multiplicative REM, and our new method is very biased in the standard REM. The explanation lies in the general (but new to us) fact that any function of a random variable is biased under a REM. The question of what exactly is random under a REM is a difficult one for a frequentist...

On the distribution of integration error by randomly-shifted lattice rules

Professor Pierre L'Ecuyer (Université de Montreal)

Wednesday 19 February 2014

A lattice rule with a randomly-shifted lattice estimates a mathematical expectation, written as an integral over the s-dimensional unit hypercube, by the average of n evaluations of the integrand, at the n points of the shifted lattice that lie inside the unit hypercube. This average provides an unbiased estimator of the integral and, under appropriate smoothness conditions on the integrand, it has been shown to converge faster as a function of n than the average at n independent random points (the standard Monte Carlo estimator). In this talk, we study the behavior of the estimation error as a function of the random shift, as well as its distribution for a random shift, under various settings. While it is well known that the Monte Carlo estimator obeys a central limit theorem when n→∞, the randomized lattice rule does not, due to the strong dependence between the function evaluations. We show that for the simple case of one-dimensional integrands, the limiting error distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square root form over a bounded interval if the integrand is periodic. We find that in higher dimensions, there is little hope to precisely characterize the limiting distribution in a useful way for computing confidence intervals in the general case. We nevertheless examine how this error behaves as a function of the random shift from different perspectives and on various examples. We also point out a situation where a classical central-limit theorem holds when the dimension goes to infinity, we provide guidelines on when the error distribution should not be too far from normal, and we examine how far from normal is the error distribution in examples inspired from real-life applications.
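
A small Python sketch of the estimator under discussion: a rank-1 lattice rule with a single uniform random shift, compared with plain Monte Carlo on a smooth test integrand. The generating vector and integrand are arbitrary illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(7)
s, n = 4, 1021                       # dimension and number of lattice points
z = np.array([1, 306, 388, 512])     # a (made-up) generating vector

def f(x):                            # smooth test integrand on [0,1]^s, exact integral 1
    return np.prod(1.0 + 0.5 * np.sin(2 * np.pi * x), axis=1)

def shifted_lattice_estimate():
    shift = rng.uniform(size=s)                        # random shift U ~ Uniform[0,1)^s
    pts = (np.outer(np.arange(n), z) / n + shift) % 1  # shifted rank-1 lattice points
    return f(pts).mean()

lattice_reps = [shifted_lattice_estimate() for _ in range(200)]
mc = f(rng.uniform(size=(n, s))).mean()

print("lattice mean/sd over shifts:", np.mean(lattice_reps), np.std(lattice_reps))
print("one plain Monte Carlo estimate:", mc)
```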

Speaker Bio

Pierre L'Ecuyer is a Professor in the Département d'Informatique et de Recherche Operationnelle at the Université de Montreal. He currently holds the Canada Research Chair in Stochastic Simulation and Optimization and an Inria International Chair (at Inria-Rennes) for 2013-2018. He obtained the Steacie Fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC) in 1995-97, twice the INFORMS Simulation Society Outstanding Research Publication Award, in 1999 and 2009, the Distinguished Service Award in 2011, a Killam Research Fellowship in 2001-03, the Urgel-Archambault Prize from ACFAS in 2002, and was elected INFORMS Fellow in 2006.

He has published over 230 scientific articles and book chapters in various areas, including random number generation, quasi-Monte Carlo methods, efficiency improvement in simulation, sensitivity analysis and optimization for discrete-event simulation models, simulation software, stochastic dynamic programming, and applications in finance, manufacturing, telecommunications, reliability, and service center management. He also developed software libraries and systems for the theoretical and empirical analysis of random number generators and quasi-Monte Carlo point sets, and for general discrete-event simulation. His work impinges on the areas of mathematics, statistics, operations research, economics, and computer science.

He is currently Associate Editor for ACM Transactions on Mathematical Software, Statistics and Computing, Cryptography and Communications, and International Transactions in Operational Research. He was Editor-in-Chief for the ACM Transactions on Modeling and Computer Simulation until June 2013. He has been a referee for over 120 different scientific journals. He was a professor in the Département d'Informatique at Université Laval (Quebec) from 1983 to 1990 and has been at the Université de Montreal since then. He has been a visiting scholar (for several months) at Stanford University (USA), INRIA-Rocquencourt (France), Ecole des Mines (France), Waseda University (Tokyo), University of Salzburg (Austria), North Carolina State University (USA), and INRIA-Rennes (France). He is a member of the CIRRELT and GERAD research centers, in Montreal.

The harmful effect of preliminary model selection on confidence intervals

Associate Professor Paul Kabaila (La Trobe University)

Friday 14 February

It is very common in applied statistics to carry out a preliminary statistical (i.e. data-based) model selection by, for example, using preliminary hypothesis tests or minimizing a criterion such as the Akaike Information Criterion (AIC). This is often followed by the construction, using the same data, of a confidence interval, with nominal coverage 1 – alpha, for the parameter of interest based on the assumption that the selected model had been given to us a priori (as the true model). This assumption is false and it typically leads to confidence intervals with minimum coverage probabilities far below 1 – alpha, making them completely inadequate. In practice, a wide variety of forms of statistical model selection have been proposed, for a variety of models. It is important that all of these forms of model selection are carefully analyzed with respect to their effect on subsequently-constructed confidence intervals.
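
A quick simulation sketch (not from the talk) of the phenomenon just described, for a two-regressor linear model: a preliminary 5% t-test decides whether to drop the second regressor, and a nominal 95% interval for the first coefficient is then built from the selected model as if it had been chosen a priori. The correlation and coefficient values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_sim = 50, 20_000
beta1, beta2 = 1.0, 0.3              # beta2 small but non-zero: the awkward case
rho = 0.8                            # correlation between the two regressors
cover = 0

for _ in range(n_sim):
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)

    # Preliminary test of H0: beta2 = 0 in the full model.
    X_full = np.column_stack([np.ones(n), x1, x2])
    b, res, *_ = np.linalg.lstsq(X_full, y, rcond=None)
    cov = (res[0] / (n - 3)) * np.linalg.inv(X_full.T @ X_full)
    t2 = b[2] / np.sqrt(cov[2, 2])

    X = X_full if abs(t2) > 1.96 else X_full[:, :2]   # data-based model selection
    b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    se1 = np.sqrt(res[0] / (n - X.shape[1]) * np.linalg.inv(X.T @ X)[1, 1])
    cover += (b[1] - 1.96 * se1 <= beta1 <= b[1] + 1.96 * se1)

print("coverage of the nominal 95% interval:", cover / n_sim)  # noticeably below 0.95
```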

As pointed out by Kabaila (2009), preliminary statistical model selection is sometimes motivated by a desire to utilize uncertain prior information. Uncertain prior information may result from previous experience with similar data sets and/or expert opinion and scientific background. This brings us to the second part of the talk: the construction of confidence intervals that utilize uncertain prior information, without the intermediate step of model selection.  A branch of this type of work was initiated by the eminent statisticians Charles Stein and John Pratt. Another branch (dealing with different models, different kinds of uncertain prior information and different desiderata) is due to Paul Kabaila and PhD students Jarrod Tuck, Khageswor Giri, David Farchione, Dilshani Tissera and Waruni Abeysekera.

Reference: Kabaila, P. (2009). The coverage properties of confidence regions after model selection. International Statistical Review, 77, 405-414.

Statistical analyses dealing with water and environment

Dr Mark James Fielding (DHI Water and Environment)

Monday 16 December 2013

With varied methods used for multivariate extreme value analyses, a number of different techniques are compared, toward the development of a more standard approach. Issues will be addressed with the determination of univariate and bivariate extreme levels, with applications in extreme rainfall.

Time permitting, other areas of statistical analysis in Water & Environment may be touched upon, including Gaussian process emulation of nonlinear computer outputs.

On the size accuracy of standard tests in adaptive clinical trials

Professor Chris J. Lloyd (Melbourne Business School)

Friday 8 November 2013

Adaptive clinical trials involve selection of the most promising treatments at the first stage and re-testing of the selected treatments at a second stage. There will be P-values for testing each treatment at stage one, and additional P-values for testing the selected treatments at stage two. The P-values need to be combined into a single summary P-value that accounts for the selection and the multiple stages.

Combining evidence from several (often two) stages is achieved through a so-called combination function. Accounting for selection is achieved through a so-called multiple comparison function such as the Bonferroni or Simes functions. When control of the family-wise error rate is required, these methods are inserted into the so-called close testing principle of Marcus et al (1976). All these methods give rise to valid and even exact tests provided that the P-values for each treatment are valid or exact.

The problem is that for discrete data most P-values are far from valid or exact. This is an amazing oversight and makes the elaborate theory for combining P-values rather moot. The purpose of this paper is to numerically and theoretically examine the extent to which combining basic test statistics mitigates or magnifies the size violation of the final test.

Computing highly accurate frequentist confidence limits from discrete data

Professor Chris J. Lloyd (Melbourne Business School)

For discrete data, frequentist confidence limits based on a normal approximation to standard likelihood based pivotal quantities can perform poorly even for quite large sample sizes. To construct exact limits requires the probability of a suitable tail set as a function of the unknown parameters. In this paper, importance sampling is used to estimate this surface and hence the confidence limits. The technology is simple and straightforward to implement. Unlike the recent methodology of Garthwaite and Jones (2009), the new method allows for nuisance parameters; is an order of magnitude more efficient than the Robbins-Monro bound; does not require any simulation phases or tuning constants; gives a straightforward simulation standard error for the target limit; includes a simple diagnostic for simulation breakdown.
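
The importance-sampling idea can be illustrated, in a heavily simplified form, on a single binomial tail probability: sample under a tilted parameter and reweight by the likelihood ratio. This hedged sketch shows only the tail-probability estimate and its simulation standard error, not the construction of confidence limits or the treatment of nuisance parameters described in the paper; all numbers are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, k_obs = 100, 70          # observed 70 successes out of 100
p0 = 0.6                    # parameter value at which the tail probability is wanted
q = 0.72                    # tilted proposal, chosen near the tail of interest
n_sim = 100_000

x = rng.binomial(n, q, size=n_sim)                     # sample under the proposal
w = np.exp(stats.binom.logpmf(x, n, p0) - stats.binom.logpmf(x, n, q))
est = np.mean((x >= k_obs) * w)                        # IS estimate of P_{p0}(X >= k_obs)
se = np.std((x >= k_obs) * w) / np.sqrt(n_sim)         # simulation standard error

print(f"importance-sampling estimate {est:.5f} +/- {se:.5f}")
print("exact tail probability:", stats.binom.sf(k_obs - 1, n, p0))
```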

Confidence sets for variable selection

Dr Davide Ferrari (The University of Melbourne)

Friday 13 September 2013

We introduce the notion of variable selection confidence set (VSCS) for linear regression based on F-testing. The VSCS extends the usual notion of confidence intervals to the variable selection problem: A VSCS is a set of regression models that contains the true model with a given level of confidence. For noisy data, distinguishing among competing models is usually very difficult and the VSCS will contain many models; if the data are really informative, the VSCS will contain a much smaller number of useful models. We advocate special attention to the set of lower boundary models (LBMs), which are the most parsimonious models that are not statistically significantly inferior to the full model at a given confidence level. Based on the LBMs, variable importance and measures of co-appearance importance of predictors can be naturally defined.

To date, there has been an almost exclusive emphasis on selecting a single model or two. In the presence of a number of predictors, especially when the number of predictors is comparable to (or even larger than) the sample size, the hope of identifying the true or the unique best model is often unrealistic. Consequently, a better approach is to select a relatively small set of models that can all more or less adequately explain the data at the given confidence level. This strategy identifies the most important variables in a principled way that goes beyond simply trusting the single lucky winner based on a model selection criterion.
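
A small sketch of the construction for simulated data, assuming the straightforward version based on partial F-tests against the full model: every submodel containing the intercept is kept in the VSCS if it is not rejected at the chosen level. The data and level are arbitrary, and the lower boundary models are approximated here simply by the smallest models in the set.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(5)
n, p = 80, 5
X = rng.normal(size=(n, p))
beta = np.array([1.5, 1.0, 0.0, 0.0, 0.0])         # only the first two predictors matter
y = X @ beta + rng.normal(size=n)

def rss(cols):
    Z = np.column_stack([np.ones(n), X[:, cols]]) if cols else np.ones((n, 1))
    return np.linalg.lstsq(Z, y, rcond=None)[1][0]

full = tuple(range(p))
rss_full, df_full = rss(full), n - p - 1
alpha = 0.05

vscs = []
for k in range(p + 1):
    for cols in combinations(range(p), k):
        extra_df = p - k
        if extra_df == 0:                           # the full model is always included
            vscs.append(cols)
            continue
        F = ((rss(cols) - rss_full) / extra_df) / (rss_full / df_full)
        if F < stats.f.ppf(1 - alpha, extra_df, df_full):   # not significantly worse
            vscs.append(cols)

print("models in the 95% VSCS:", vscs)
smallest = min(len(m) for m in vscs)
print("size-minimal members (proxy for LBMs):", [m for m in vscs if len(m) == smallest])
```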

Averaging in the right way: how to combine data into a representative value

Professor G. Beliakov (Deakin University)

Friday 6 September 2013

In this lecture I will talk about aggregating data, and specifically about averaging aggregation. Our aim is to define some sort of representative value of the data, and the arithmetic mean is the simplest way to do this. But is it the right way? I will present many other means that do a similar job, and will talk about their properties and applicability. From the arithmetic mean I will move to fuzzy measures and integrals and discuss how these tools allow one to take into account dependencies (redundancy, complementarity) between the inputs, and avoid double counting. I will then discuss penalty-based aggregation, where the average value is found by minimising some sort of penalty for lack of consensus. This construction is so general that it allows one to obtain all possible averaging functions through minimising a suitable penalty. Further on I will discuss some very recent developments in this area, including non-monotone averaging (and noise filtering).
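
As a tiny illustration of the penalty-based view just mentioned (not code from the lecture): the arithmetic mean minimises the total squared disagreement with the inputs, the median minimises absolute disagreement, and a Huber-type penalty gives something in between. The data are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([2.0, 3.0, 3.5, 4.0, 50.0])   # note the outlying input 50

def aggregate(penalty):
    """Return the value y minimising sum_i penalty(x_i - y)."""
    res = minimize_scalar(lambda y: np.sum(penalty(data - y)),
                          bounds=(data.min(), data.max()), method="bounded")
    return res.x

print("squared penalty  ->", aggregate(lambda d: d**2), " (arithmetic mean:", data.mean(), ")")
print("absolute penalty ->", aggregate(np.abs), " (median:", np.median(data), ")")
print("Huber-type penalty ->", aggregate(lambda d: np.where(np.abs(d) < 1, d**2, 2 * np.abs(d) - 1)))
```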

Applications of averaging are abundant, from statistics to signal processing, voting, game theory, image processing, data analysis, decision making, and so on. In this lecture I will focus on image processing and multiattribute decision making problems.

Presenter biography

Professor G. Beliakov has been with Deakin University since 1999, when he started as a lecturer after several postdoc positions in Australia and overseas. He is a Senior Member of IEEE and is an Associate Editor of two flagship journals in the area of fuzzy systems. His research focuses on computational mathematics, numerical data analysis, fuzzy systems and aggregation operators. He is an author of a monograph on aggregation functions, two edited volumes and over 150 refereed papers.

Sequential meta-analysis or how many more studies do we need?

Professor Elena Kulinskaya (University of East Anglia)

Friday 16 August 2013

Meta-analysis has applications for the design of new trials, with a view to finding a definitive answer to an existing research question. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis (FEM), but conceptual difficulties arise in the random effects model (REM). We briefly review several existing approaches.

One approach applying sequential meta-analysis to design is "Trial Sequential Analysis" (TSA), developed by Wetterslev, Thorlund, Brok, Gluud and others from the Copenhagen Trial Unit. In TSA, information size is based on the required sample size of a single new trial which, in REM, is obtained by simply inflating it in comparison to FEM.

However, this is not sufficient as, depending on the amount of heterogeneity, a minimum of several new trials may be indicated, and the total number of new patients needed may be substantially reduced by planning an even larger number of small trials. We provide explicit formulae to determine the requisite minimum number of trials and their sample sizes within this framework, which also exemplify the conceptual difficulties referred to.

All these points are illustrated with two practical examples, including the well-known meta-analysis of magnesium for myocardial infarction.

This is a joint work with John Wood, UCL.

Accurate robust inference

Professor Elvezio Ronchetti (Research Center for Statistics and Department of Economics, University of Geneva)

Friday 14 June 2013

Classical statistics and econometrics typically rely on assumptions on the structural and the stochastic parts of the model and on optimal procedures derived under these assumptions. Standard examples are least squares estimators in linear models and their extensions, maximum likelihood estimators and the corresponding likelihood-based tests, and GMM techniques in econometrics. Inference is typically based on approximations obtained by standard first-order asymptotic theory.

However, in the presence of small deviations from the assumed model, this can lead to inaccurate p-values and confidence intervals. Moreover, when the sample size is moderate to small or even in large samples when probabilities in the tails are required, first-order asymptotic analysis is often too inaccurate.

In this talk we review a class of techniques which combine robustness and good accuracy in finite samples. They are derived using saddlepoint methods and provide robust tests for testing hypotheses on the parameters and for overidentification which are second-order correct in terms of relative error.

Their nonparametric versions are particularly appealing as they are linked to empirical likelihood methods, but exhibit better accuracy than the latter in finite samples even in the presence of model misspecifications.

The theory is illustrated in several important classes of models, including linear and generalized linear models, quantile regression, composite likelihood, functional measurement error models, and indirect inference in diffusion models.

Confidence intervals for lattice-valued random variables

Dr Geoffrey Gerard Decrouez (University of Melbourne)

Friday 7 June 2013

Motivated by applications in biosecurity, I will compare the properties of different asymptotic confidence intervals for sums of binomial proportions. Particular focus will be on the oscillations present in the coverage probability of these intervals, due to the discrete nature of the binomial distribution. We then propose a method, called split-sample method, to reduce these oscillations in the context of a single binomial proportion. This method follows from recent results on asymptotic expansions of the distribution of sums of sample means of lattice-valued random variables. In particular, we are interested in the accuracy of the normal approximation, and in the error due to the presence of the oscillating discontinuous term present in the first-order expansion, which accounts for the approximation of a discrete distribution by a continuous one. We show that the main contribution of the discontinuous term can be understood in terms of the ratio of the sample sizes used to compute the sample means. Specifically, when the ratio converges to an irrational number, or sufficiently slowly to a rational number, then the size of the discontinuous term can be greatly reduced, thus improving on the normal approximation. This theoretical property motivated the split-sample method for constructing confidence intervals for a single binomial proportion or a Poisson mean.

This work is a collaboration with Andrew Robinson and Peter Hall.
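
A short sketch of the oscillation phenomenon for a single proportion, using the nominal 95% Wald interval purely as an example: the exact coverage at each p is obtained by summing binomial probabilities over the outcomes whose interval covers p. The sample size and grid of p values are arbitrary.

```python
import numpy as np
from scipy import stats

n, z = 40, 1.96
k = np.arange(n + 1)
phat = k / n
half = z * np.sqrt(phat * (1 - phat) / n)
lo, hi = phat - half, phat + half            # Wald interval for each possible outcome k

for p in np.linspace(0.05, 0.5, 10):
    pmf = stats.binom.pmf(k, n, p)
    coverage = pmf[(lo <= p) & (p <= hi)].sum()   # exact coverage at this p
    print(f"p = {p:.3f}   coverage = {coverage:.3f}")
# The coverage jumps around (and often falls well below) 0.95 as p varies,
# because the attainable interval endpoints form a discrete lattice.
```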

Secondary analysis of nested case-control data: Methods and practical implications

Dr Agus Salim (Department of Mathematics and Statistics, La Trobe University)

Friday 31 May 2013

Many epidemiological studies use a nested case-control (NCC) design to reduce cost while maintaining study power. However, since the sampling was conditional on the primary outcome and matching variables, re-using nested case-control data to analyse a secondary outcome is not straightforward. Recently, several authors have proposed methods that can be used to analyse a secondary outcome. I will discuss the currently available methods and further propose a maximum likelihood method where the likelihood contribution is derived by respecting the original sampling procedure with respect to the primary outcome. Using a simulation study and a real dataset from a Swedish cohort, we compare the performance of the different methods and assess the practical implications of re-using nested case-control data in terms of statistical efficiency.

Asymptotic redundance results for the use of GLS detrending in panel unit root testing

Professor Joakim Westerlund (Deakin University)

Friday 10 May 2013

One of the most well-known facts about unit root testing is that the Dickey–Fuller test based on OLS detrended data suffers from low power, and that the use of GLS detrending can lead to substantial power gains. The present paper shows how the use of GLS rather than OLS detrended data when testing for a unit root in panels not only leads to reduced power, but actually has an order effect on the neighborhood around unity for which local asymptotic power is negligible.

Advances in automatic time series forecasting

Professor Rob J Hyndman (Monash University)

Friday 19 April 2013

Many applications require a large number of time series to be forecast completely automatically. For example, manufacturing companies often require weekly forecasts of demand for thousands of products at dozens of locations in order to plan distribution and maintain suitable inventory stocks. In population forecasting, there are often a few hundred time series to be forecast, representing various components that make up the population dynamics. In these circumstances, it is not feasible for time series models to be developed for each series by an experienced statistician. Instead, an automatic forecasting algorithm is required.

I will describe some algorithms for automatically forecasting various types of time series, including:

  • univariate non-seasonal time series
  • univariate seasonal time series
  • univariate time series with multiple seasonality
  • functional time series
  • hierarchical time series.

In addition to providing automatic forecasts when required, these algorithms also provide high quality benchmarks that can be used when developing more specific and specialized forecasting models. All the methods described are freely available in R packages.
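
The algorithms described in the talk are available in R packages; purely to convey the flavour of automatic forecasting, here is a hedged Python sketch that selects a non-seasonal ARIMA order by AIC over a small grid and then forecasts. It is a much cruder search than the published algorithms, and the series is simulated.

```python
import warnings
import numpy as np
from itertools import product
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
# A made-up AR(2) series standing in for one of the many series to be forecast.
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

best = None
warnings.simplefilter("ignore")
for p, d, q in product(range(3), range(2), range(3)):   # small search grid
    try:
        res = ARIMA(y, order=(p, d, q)).fit()
    except Exception:
        continue                                        # skip orders that fail to fit
    if best is None or res.aic < best[1]:
        best = ((p, d, q), res.aic, res)

order, aic, res = best
print("selected order:", order, "AIC:", round(aic, 1))
print("next 5 forecasts:", res.forecast(5).round(2))
```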

Characterisation theorem on losses in GI^X/GI^Y/1/n queues

Dr Vyacheslav Abramov (Swinburne University of Technology)

Friday 12 April 2013

In this talk, we discuss an important property of losses in GI^X/GI^Y/1/n queues, that is, single-server queues with general batch arrivals, general batch services and a finite buffer, where we assume that the inter-arrival times satisfy the NWUE property (New Worse than Used in Expectation). Let M denote the total number of customers lost during a busy period. We prove that the expectation of M is the same for all n and equal to EX (the expected size of an arrival batch) if and only if:
(i) arrivals are Poisson,
(ii) the size of an arrival batch is lattice with span d,
(iii) the size of a service batch is a deterministic random variable taking the value d,
(iv) b EX = ad, where a is the expected inter-arrival time and b is the expected service time.

This talk is based on a recent paper [V.Abramov, Oper. Res. Lett. 41 (2013), 150-152].

Latent class methods for data analysis with randomly missing covariates

Professor Murray Aitkin, Department of Mathematics and Statistics, University of Melbourne

Friday 22 March 2013

Randomly missing covariates are endemic in survey analysis in all fields. Multiple imputation (MI) methods for their accommodation are now in widespread use, assisted by many statistical package procedures.

An important requirement for MI is the imputation model - the model for the conditional distribution of the incomplete covariates given the complete covariates. Practical computational costs usually lead to the use of the multivariate normal model for this purpose, even when covariates are binary, discrete or continuous but non-normal.

This talk develops "nonparametric" analysis methods based on the latent class model for the incomplete covariates, and gives several small examples.

Drug-drug interactions: A data mining approach

Dr Musa Mammadov (University of Ballarat)

Friday 15 February 2013

Drug-drug interaction is one of the important problems of Adverse Drug Reaction (ADR). This presentation describes a data mining approach to this problem developed at the University of Ballarat. This approach is based on drug-reaction relationships represented in the form of a vector of weights. Each vector is related to a particular drug and can be considered as a pattern in causing adverse drug reactions. Optimal patterns for drugs are determined as the solution to a global optimization problem that considers, in some sense, a nonlinear modification of linear least squares. Although this approach can be used for solving many ADR problems, we concentrate only on drug-drug interactions. Numerical implementations are carried out on different classes of reactions from the Australian Adverse Drug Reaction Advisory Committee (ADRAC) database. The results obtained extend our understanding of drug-drug interaction from a data mining point of view.

Bernstein polynomials

Professor Terry Mills (La Trobe University)

Friday 1 February 2013

The Ukrainian mathematician Sergei Natanovich Bernstein (1880-1968) wrote about probability theory and approximation theory. About 100 years ago, he introduced a sequence of polynomials that are now called Bernstein polynomials. Bernstein used these polynomials to develop a new proof of Weierstrass' approximation theorem based on ideas from probability theory. Over the last century, many ideas have developed from this simple beginning. In this seminar, I will describe some of these developments. This seminar stems from work that is being conducted in collaboration with my colleagues Graeme Byrne, Robert Champion, Christopher Lenard, and Simon Smith.
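
To make the construction concrete: the nth Bernstein polynomial of f on [0,1] is B_n(f; x) = Σ_{k=0}^{n} f(k/n) C(n,k) x^k (1−x)^{n−k}, i.e. the expected value of f(S_n/n) for a binomial S_n with parameters n and x, which is where the probabilistic proof of Weierstrass' theorem comes from. A small numerical sketch (the test function is an arbitrary choice):

```python
import numpy as np
from scipy.stats import binom

def bernstein(f, n, x):
    """B_n(f; x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) = E[f(S_n/n)], S_n ~ Bin(n, x)."""
    k = np.arange(n + 1)
    return binom.pmf(k[:, None], n, x).T @ f(k / n)

f = lambda t: np.abs(t - 0.4)            # continuous, but not smooth at 0.4
x = np.linspace(0, 1, 201)
for n in (5, 20, 100, 400):
    err = np.max(np.abs(bernstein(f, n, x) - f(x)))
    print(f"n = {n:4d}   max error = {err:.4f}")   # shrinks, as Weierstrass promises
```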

Melbourne - Mathematics

Falling-head ponded infiltration for very nonlinear soil properties

Dr Dimetre Triadis (La Trobe University)

The century-old Green and Ampt infiltration solution represents only an extreme example of behaviour within a larger class of very nonlinear, delta function diffusivity soils. The mathematical analysis of these soils is greatly simplified by the existence of a sharp wetting front below the soil surface. Solutions for more realistic delta function soil models have recently been presented for infiltration under surface saturation without ponding.

After general formulation of the problem, solutions for a full suite of delta function soils are derived for ponded surface water depleted by infiltration. Exact expressions for the cumulative infiltration as a function of time, or the drainage time as a function of the initial ponded depth may take implicit or parametric forms, and are supplemented by simple asymptotic expressions valid for small times, and small and large initial ponded depths.

As with surface saturation without ponding, the Green-Ampt model overestimates the effect of the soil hydraulic conductivity. At the opposing extreme a low-conductivity model is identified that also takes a very simple mathematical form and appears to be more accurate than the Green-Ampt model for large ponded depths. Between these two, the nonlinear limit of Gardner's soil is recommended as a physically valid first approximation. Relative discrepancies between different soil models are observed to reach a maximum for intermediate values of the dimensionless initial ponded depth, and in general are smaller than for surface saturation without ponding.

An introduction to Yang-Baxter maps, 3D consistency and discrete integrable systems

Dr Theodoros Kouloukas (La Trobe University and University of Patras, Greece)

2:00pm, Friday 23 May, 2014
Room 310 (Access Grid Room), Physical Sciences 2, La Trobe University, Melbourne (Bundoora) Campus

We review some recent developments in the theory of Yang-Baxter maps and three-dimensional consistent equations on quadgraphs (discrete analogues of partial differential equations). Some fundamental constructions of Yang-Baxter maps that admit Lax representations in terms of polynomial matrices are presented. We study their properties and we investigate the connection with a special class of 3D consistent equations on quadgraphs. By considering periodic initial value problems on two-dimensional lattices, we derive integrable maps which preserve the spectrum of their monodromy matrix.

Boundary-value problems for the Ricci flow

Dr Artem Pulemotov (University of Queensland)

12:00pm, Monday 5 May, 2014
Room 212 (Seminar Room 1), Physical Sciences 2, La Trobe University, Melbourne (Bundoora) Campus

The Ricci flow is a differential equation describing the evolution of a Riemannian manifold (i.e., a "curved" geometric object) into an Einstein manifold (i.e., an object with "constant" curvature). This equation is particularly famous for its key role in the proof of the Poincaré Conjecture. Understanding the Ricci flow on manifolds with boundary is a difficult problem with applications to a variety of fields, such as topology and mathematical physics. The talk will survey the current progress towards the resolution of this problem. In particular, we will discuss new results concerning spaces with symmetries.

Integrable systems, Symplectic geometry and Toric action in Projective and c-Projective geometry

Dr Vladimir Matveev (University of Jena, Germany)

3:00pm, Wednesday 30 April, 2014
Room 313 (3rd year Teaching Room 1), Physical Sciences 2, La Trobe University, Melbourne (Bundoora) Campus

I will show some applications of the standard techniques of integrable systems in projective and c-projective geometry. I will show that, under some nondegeneracy assumption, projectively and c-projectively equivalent metrics (I will explain what they are) on a closed manifold generate a commutative $R^n$-action on the manifold. In the c-projective case, this action reduces to a torus action, and the standard applications of the theory of toric actions on symplectic manifolds already give nontrivial and interesting results and, in particular, describe completely the topology of such manifolds. In the projective case, the situation is slightly more complicated in general, but also in this case the existence of the action plus some additional results help a lot in understanding the topology of the possible manifolds. Though the title and abstract may sound slightly technical, the talk is planned to be very geometric and will not require too much above the standard mathematical background.

The results are a part of the ongoing projects with A. Bolsinov, D. Calderbank, M. Eastwood, K. Neusser and S. Rosemann.

(Non)integrability of natural Hamiltonian systems: tricks and methods

Dr Vladimir Matveev (University of Jena, Germany)

12:00pm, Monday 28 April, 2014
Room 212 (Seminar Room 1), Physical Sciences 2, La Trobe University, Melbourne (Bundoora) Campus

I will consider natural Hamiltonian systems with two degrees of freedom and will discuss the existence and the nonexistence of integrals that are polynomial in momenta. This is a classical topic, and in my talk I will give a historical overview and explain the classical and the modern motivation. In the mathematical part of my talk, I will mostly discuss the following questions: given a metric, how to prove the (non)existence of an integral of a given degree, and how to find it explicitly? I will explain the classical and modern methods to study this question. As an application, I will present new systems admitting an integral of degree 3 in momenta and a solution of a problem of J.Brink.

The first result of the talk is joint with V.Shevchishin, and the second, with B.Kruglikov.

Pi-Day Celebration

Dr Katherine Seaton (La Trobe University)

Around the world, 3.14 (March 14) is celebrated as Pi-Day. This year, we will have an interactive seminar on Pi-Day.

Katherine plans to unveil her creation of a crocheted hyperbolic plane (in the method of Daina Taimina), and explain what Pi has to do with the area of triangles on the hyperbolic plane, as well as the Buffon's Needle experiment for a geometric-probability determination of Pi.
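
A quick Monte Carlo sketch of the Buffon's Needle idea mentioned above, for a needle of length l dropped on parallel lines a distance d ≥ l apart: the crossing probability is 2l/(πd), so π can be estimated from the observed crossing frequency. The numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(314)
n_drops, l, d = 1_000_000, 1.0, 1.5        # needle length l <= line spacing d

y_centre = rng.uniform(0, d / 2, n_drops)  # distance from needle centre to nearest line
theta = rng.uniform(0, np.pi / 2, n_drops) # acute angle between needle and the lines
crossings = np.sum(y_centre <= (l / 2) * np.sin(theta))

pi_hat = 2 * l * n_drops / (d * crossings) # invert P(cross) = 2l / (pi * d)
print("estimate of pi:", pi_hat)
```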

There will be door-prizes and activities and possibly food.

If you would like to crochet your own hyperbolic pseudo-sphere (and you can do double crochet) bring a crochet hook and yarn. The cheapest acrylic yarn works best. Don't laugh – the last three joint meetings of the AMS and the MAA have had a special session on Mathematics and Mathematics Education in Fiber Arts.

Research projects

Wednesday February 26 2014

Gavin Rossiter (Grant & Yuri). Piecewise linear periodic homeomorphisms of the plane.

In 1993 Morton Brown published an article describing a rather curious piecewise linear, periodic (of order 9), homeomorphism of the plane given by H(x, y)=(|x|-y, x). In a letter to Brown, Donald Knuth provided a combinatorial proof of the periodicity; however he posed the question of whether an 'insightful' proof of the periodicity could be found.

We provide such an insight, by explaining Brown's map in terms of the finite subgroups of SL(2, Z) and PSL(2, Z), and we then use this interpretation to build similar maps of order 5 and 7.
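
A short numerical check of the periodicity in question (Brown's map H(x, y) = (|x| − y, x) returns every starting point to itself after nine applications):

```python
import numpy as np

def H(x, y):                       # Brown's piecewise linear map
    return abs(x) - y, x

rng = np.random.default_rng(9)
for _ in range(5):
    start = p = tuple(rng.uniform(-10, 10, 2))
    for _ in range(9):
        p = H(*p)
    print(np.allclose(start, p))   # True every time: H has period 9
```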

Hugh Marman (Marcel & Tomasz). Finite axiomatisation of finitely generated quasivarieties of graphs.

All finitely axiomatisable quasivarieties of graphs adhere to a particular structure, being defined by excluded substructures. Of the finitely-generated quasivarieties only five (generated by a single simple graph each) adhere to this structure. Accordingly, these five are the only finitely-generated quasivarieties of graphs that admit a finite axiomatisation.

Michael Kovacevic (Grant & Yuri). A little point set topology.

The aim of this project was to study some advanced concepts of classical point set topology to form a solid foundation for the start of my Honours project. I studied various axioms of separation and connectedness, para- and local compactness, etc., and investigated the connections between them. A selection of results will be presented.

AMSI Vacation Research Scholarship Talks

Friday 7 February 2014

Adam Wood (Grant & Yuri) - Geodesics in the real 3-dimensional Heisenberg group

The aim of this investigation is to classify the geodesics in the Heisenberg group and obtain a visual representation of the classified curves.  This will be done using a left invariant metric.

Charles Gray (Brian & Jane) - The Digraph Lattice

Graph homomorphisms play an important role in graph theory and its applications. For example, the $n$-colourability of a graph $G$ is equivalent to the existence of a graph homomorphism from $G$ to the complete graph $K_n$. Using lattice theory, we re-examine some nice proofs and problems explored by Hell and Nesetril on the order induced by graph homomorphisms (Graphs and Homomorphisms, Oxford, 2004).
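
To illustrate the equivalence mentioned above: a map V(G) → V(K_n) is a graph homomorphism exactly when adjacent vertices of G receive distinct images, i.e. a proper n-colouring. A brute-force sketch (the 5-cycle is an arbitrary example):

```python
from itertools import product

def has_homomorphism_to_Kn(edges, vertices, n):
    """True iff there is a graph homomorphism G -> K_n, i.e. a proper n-colouring of G."""
    for colouring in product(range(n), repeat=len(vertices)):
        c = dict(zip(vertices, colouring))
        # adjacent vertices must map to distinct vertices of the complete graph K_n
        if all(c[u] != c[v] for u, v in edges):
            return True
    return False

# The 5-cycle C5 has chromatic number 3: no homomorphism to K_2, but one to K_3.
vertices = range(5)
edges = [(i, (i + 1) % 5) for i in range(5)]
print(has_homomorphism_to_Kn(edges, vertices, 2))  # False
print(has_homomorphism_to_Kn(edges, vertices, 3))  # True
```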

Tim Koussas (Grant & Yuri) - Conway's Thrackle Conjecture

Graph theory is a well-known branch of mathematics, comprising topics of interest in both pure and applied mathematics. The study of thrackles, which are a certain type of graph drawing, is an example of pure mathematics in graph theory. Thrackles are the creation of John H. Conway, who is currently a Professor of Mathematics at Princeton University. Most famously associated with thrackles is Conway's thrackle conjecture, a problem Conway posed over fifty years ago that as yet remains unproved. This project focuses on a proof of the conjecture for a particular class of thrackles known as spherical thrackles.

Krystyn Villaflores (Agus) - Absolute Risk Estimation Using Nested Case-Control Data

Due to financial constraints, there has been a trend towards using only a subset of subjects from large cohorts. We have developed an approach for utilising data obtained under a nested case-control (NCC) design to estimate absolute risk. Using realistic simulations, we have found that this method outperforms the Langholz-Borgan approach when used on matched NCC data.

Rupert Kuveke (Paul) - A comparison of several methods for finding Value at Risk

My talk is on the financial risk measure Value at Risk (VaR). I compute VaR using several methods - Historical Simulation, Filtered Historical Simulation, and fitting a GARCH(1,1) model with a standardised t distribution - applied to S&P 500 Index data. I then compare the results.
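As a rough illustration of the first of these methods, a minimal historical-simulation VaR calculation might look like the sketch below (the simulated returns merely stand in for the S&P 500 data used in the talk; the 99% level and sign conventions are assumptions of the example).

```python
import numpy as np

def historical_var(returns, level=0.99):
    """Historical-simulation VaR: the loss level exceeded with probability 1 - level."""
    losses = -np.asarray(returns)            # losses are negative returns
    return np.quantile(losses, level)

# Illustrative stand-in data: heavy-tailed daily returns
rng = np.random.default_rng(seed=1)
returns = 0.01 * rng.standard_t(df=5, size=1500)

print(f"99% one-day VaR: {historical_var(returns):.4f}")
```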

On the classification of Quantum groups

Professor Alexander Stolin (University of Gothenburg, Sweden)

Friday 15 November 2013

We will explain an approach to the classification of certain quantum groups.

Let g be a complex simple Lie algebra. A quantum group is a Hopf algebra over C[[h]] which has g as its classical limit; roughly speaking, the classical limit is obtained by setting h=0. More precisely, as the classical limit of a Hopf algebra, g becomes a Lie bialgebra. It is well known that simple Lie algebras are classified by means of the so-called Dynkin diagrams. In 1982, Belavin and Drinfeld classified the corresponding Lie bialgebras by means of the Belavin-Drinfeld triples, which can be described as two isomorphic subdiagrams of the Dynkin diagram of g (they are called triples because the isomorphism between the subdiagrams matters).

In order to classify the corresponding quantum groups, we introduced further combinatorial data, which we called Belavin-Drinfeld cohomologies. There are two types of BD-cohomologies, namely non-twisted and twisted versions.

In my talk, I will explain how to describe BD-cohomologies for special linear and orthogonal algebras. No prior knowledge except for the standard facts about simple Lie algebras is needed.

This is joint work with Iulia Pop (Chalmers University of Technology, Sweden).

Integrable-like behaviour in the Fermi-Pasta-Ulam model

Dr Heleni Christodoulidi (Patras University, Greece)

Monday 18 November 2013

In the 1950s, Fermi, motivated by fundamental questions of statistical mechanics, started a numerical experiment in collaboration with Pasta and Ulam to test the ergodic properties of nonlinear dynamical systems. The chosen system, now called the FPU system, was a one-dimensional chain of N nonlinearly coupled oscillators, described by a quadratic nearest-neighbour interaction potential plus a cubic perturbation. Fermi's ergodic hypothesis states that a system under an arbitrarily small perturbing force becomes generically ergodic. Starting with the longest-wavelength normal mode, the FPU system showed non-ergodic behaviour. Many pioneering works followed, seeking to explain this paradox. The most prominent of these are the work of Zabusky and Kruskal (1965), which gave evidence of a connection between the FPU system in the thermodynamic limit and the Korteweg-de Vries PDE, and the work of Flaschka et al. (1982), in which the authors discovered similar behaviour of the FPU model in the Toda chain. Recent developments show a more complete picture of the problem and its explanation.
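For reference, a standard way of writing the Hamiltonian with quadratic nearest-neighbour coupling plus a cubic perturbation, often called the FPU-alpha model (the notation and the fixed-end boundary conditions are assumptions of this sketch, not specified in the abstract), is
\[
H \;=\; \sum_{i=1}^{N} \frac{p_i^{2}}{2} \;+\; \sum_{i=0}^{N} \left[ \frac{(q_{i+1}-q_i)^{2}}{2} + \frac{\alpha}{3}\,(q_{i+1}-q_i)^{3} \right], \qquad q_0 = q_{N+1} = 0,
\]
where the parameter $\alpha$ controls the strength of the cubic perturbation.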

Classifying finite p-groups by coclass

Dr Heiko Dietrich (Monash University)

Friday 8 November 2013

One of the major themes in group theory is the classification of groups up to isomorphism. A famous example is the 'Classification Theorem of Finite Simple Groups', which classifies all finite simple groups - the basic building blocks of all finite groups. However, even if we restrict attention to the least complicated finite simple group, the cyclic group of prime order p, it is still an intricate problem to put these groups together in order to construct all groups of p-power order, so-called finite p-groups, up to isomorphism.

One approach for classifying finite p-groups is to fix the order, say p^n, and to classify all groups having this order. However, this problem becomes intractable already for small n. In this talk, we discuss an alternative approach, namely, a classification of p-groups by "coclass". If the coclass is fixed, say r, then the finite p-groups of coclass r (up to isomorphism) define an infinite graph, the so-called coclass graph G(p,r). This graph has some remarkable periodic structures, which are reflected in the structures of the groups. We will discuss the structure of this graph, some classification results, and some open conjectures.
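For readers unfamiliar with the term, the standard definition of coclass (supplied here as background, not spelled out in the abstract) is
\[
|G| = p^{n}, \quad \text{nilpotency class } c \quad \Longrightarrow \quad \operatorname{cc}(G) = n - c ,
\]
so fixing the coclass $r$ constrains how far a group of order $p^{n}$ can be from the maximal possible nilpotency class.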

Generalisations of Wilson's Theorem for Double-, Hyper-, Sub-, and Super-factorials

Dr Grant Cairns (La Trobe University)

Friday 1 November 2013

Wilson's theorem, which has been known for many centuries, is a basic fact about prime numbers: it says that p is prime if and only if (p-1)!+1 is divisible by p. These days there are four standard generalizations of the factorial function: the double factorial, the hyperfactorial, the subfactorial and the superfactorial. In this talk we recall the definitions of these exotic factorial functions, and we present generalizations of Wilson's theorem for them.
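For readers who want to play with the classical statement before the generalisations, a direct (and deliberately naive) check in Python is sketched below; it is far too slow to be a practical primality test, which is part of the theorem's charm.

```python
from math import factorial

def is_prime_wilson(p):
    """Wilson's theorem: an integer p > 1 is prime iff (p - 1)! + 1 is divisible by p."""
    return p > 1 and (factorial(p - 1) + 1) % p == 0

print([n for n in range(2, 40) if is_prime_wilson(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```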

This talk should be accessible to undergraduate students. It is joint work with Christian Aebi (College Calvin, Geneva, Switzerland), and has been accepted for publication in the Amer. Math. Monthly.

Slow Variation in Population Models

Associate Professor John J Shepherd (RMIT University)

Friday 18 October 2013

You are welcome to attend the following joint RMIT-La Trobe seminar at La Trobe University.

When studying the evolution of populations modelled by differential equations (or systems of such equations), we are usually faced with equations in which, for mathematical simplicity, the defining model coefficients (parameters) are constants. This simplification makes the mathematical problems within such models relatively straightforward to solve analytically.

However, when these models are modified to allow for time variation in the model parameters, such an analytic solution is usually not possible, and numerical solution methods must be employed, with the limitations these entail. In particular, general trends and properties are hard to detect in numerically generated solutions.
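As a concrete illustration of the kind of model in question (the logistic equation is used here only as a representative example, not necessarily one from the talk), slow time variation of the parameters might enter as
\[
\frac{dP}{dt} \;=\; r(\varepsilon t)\, P \left( 1 - \frac{P}{K(\varepsilon t)} \right), \qquad 0 < \varepsilon \ll 1 ,
\]
where the small parameter $\varepsilon$ encodes the slow variation of the growth rate $r$ and the carrying capacity $K$.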

When the parameter variation is slow, multiscaling methods may be used to construct quite general approximate analytic expressions for the evolving populations. These expressions have proved to be quite accurate, when compared with the results of numerical calculations. However, these approximations have been shown to fail in neighbourhoods of points we term transition points, where the model usually undergoes a fundamental change in solution behaviour.

In this talk, I briefly outline the multiscaling process and show how it can be used to approximate the solutions of a range of well-known population models away from transition points. I will also illustrate how these approximations can be linked to provide an approximation for the evolving population right through such points.

Following the seminar light refreshments will be provided, giving the opportunity for discussions with RMIT colleagues.

Three Films Good, Four Films Bad!

Dr Paul Grassia (CEAS, University of Manchester, UK)

Friday 9 August 2013

Since the work of Plateau (1873) and Kelvin (1887) it has been well known that films in a static foam meet threefold (and never fourfold) along junctions, now known as Plateau borders. The fourfold state is energetically unstable -- and will lead to bubble rearrangements producing new films, such that the threefold film meeting rule is recovered. Foam rheology concerns the application of strain to a foam, a process which causes certain films to grow but others to shrink. A detailed description of foam rheology requires tracking the evolving geometry of the strained films, which can induce the `forbidden' fourfold film meeting states, from which the foam must subsequently relax. The foam's inherent relaxation rate following the fourfold state can differ substantially from the externally imposed rate of strain driving foam flow and deformation: moreover the relaxation rate can be set either by viscous dissipation effects or by departures from physicochemical equilibrium. The effects of deforming foam at different rates will be considered here. Strong viscous dissipation can delay the onset of the fourfold film state, postponing the foam's relaxation, which leads to highly elongated bubbles and possibly even produces film bursting. Slow physicochemical equilibrium might suppress the fourfold film state -- and the subsequent relaxation -- altogether. Even weak departures from physicochemical equilibrium are shown to lead to non-trivial, non-linear rheological behaviour.

Seminar on graph theory and Lie algebras ("Grant and Yuri's Seminar")

Semester One and Two, 2013

We will cover a broad range of topics from graph theory, game theory and combinatorics to Lie algebras and basic topology.

Most of the topics will require no or minimal preliminary knowledge and will be accessible to enthusiastic and motivated third year students with an interest in and passion for all sorts of Mathematics.

Four-valent graphs with a cross structure: Euler tours, chord diagrams, embeddings in surfaces

Professor Denis Ilyutko (Moscow University, Russia)

Friday 7 June 2013

We consider finite connected four-valent graphs with a cross structure, i.e. graphs with a pairing of the four half-edges at each vertex. Graphs with a cross structure have Euler tours of different types, depending on how the tour travels through a vertex: we can pass from a half-edge to the opposite half-edge, or from a half-edge to a non-opposite half-edge. In turn, Euler tours are encoded by chord diagrams. There are criteria, in terms of chord diagrams, telling us when a graph with a cross structure can be embedded in the plane (Cairns–Elton and Read–Rosenstiehl) or in a surface of genus g (Manturov). These criteria use different approaches depending on the types of Euler tours in question. In the first part of the talk we consider a connection between these criteria. The second part of the talk is devoted to simple graphs and chord diagrams. It is known that there are graphs which are not circle graphs, i.e. not intersection graphs of chord diagrams (Bouchet). In spite of this fact, many properties of circle graphs remain true for simple graphs.

Abelian groups: their roles in the pathological behaviour of braids and the downfall of democracy!

A joint La Trobe-RMIT Math Colloquium.

Speaker: Professor Brian Davey (La Trobe University)

Friday 24 May 2013

A famous theorem in social choice theory, due to Kenneth Arrow, says that there is no "reasonable" system for combining preference orders. For example, in an election we may want a system for taking the voters' various rankings of the candidates in order of preference and obtaining a single ranking of the candidates.

I will discuss an unexpected connection between Arrow's Theorem and my research into braids (a kind of partially ordered set). Indeed, my study of braids led to a theorem about finite abelian groups that can be applied to simplify the proof of Arrow's Theorem: a surprising interconnection between three apparently quite different parts of mathematics.

The talk will be aimed at a general audience and will assume nothing other than some elementary facts about abelian groups.

Mathematica in Research and the Classroom

Craig Bauling, Wolfram Research, Champaign, USA

Friday 19 April 2013

Join Craig Bauling as he guides us through the capabilities of Mathematica 9. Craig will demonstrate the key features that are directly applicable for use in teaching and research. Topics of this technical talk include:

  • Natural language input (http://www.wolfram.com/broadcast/screencasts/free-form-input/)
  • Market leading statistical analysis functionality
  • Predictive interface that guides users to appropriate next steps
  • 2D and 3D information visualisation
  • Creating interactive models that encourage student participation and learning
  • Practical applications in Engineering, Chemistry, Physics, Finance, Biology, Economics and Mathematics
  • On-demand Chemical, Biological, Economic, Finance and Social data
  • Mathematica as an efficient programming language.

Prior knowledge of Mathematica is not required - new users are encouraged. Current users will benefit from seeing the many improvements and new features of Mathematica 9. This is a great opportunity to get people not experienced with Mathematica involved and excited. Students are welcome as well.

Modelling time-dependent partial differential equations using a moving mesh approach based on conservation

Dr Tamsin E. Lee, University of Reading, UK

Friday 22 March 2013

One of the advantages of moving mesh methods for the numerical solution of partial differential equations is their ability to track moving boundaries. In this talk we propose a velocity-based moving mesh method in which we primarily focus on moving the nodes so as to preserve local mass fractions. To recover the solutions from the mesh we use an integral approach which avoids altering the structure of the original equations when incorporating the velocity.
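In outline, the conservation principle described above can be expressed as follows (a schematic one-dimensional version, included as an illustration rather than the authors' exact formulation): each moving cell retains a fixed fraction of the total mass,
\[
\int_{x_i(t)}^{x_{i+1}(t)} u(x,t)\, \mathrm{d}x \;=\; c_i \int_{a(t)}^{b(t)} u(x,t)\, \mathrm{d}x ,
\]
and the node velocities $\dot{x}_i(t)$ are chosen precisely so that these fractions $c_i$ remain constant in time.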

We apply our method to a range of moving boundary problems: the porous medium equation; Richards' equation; the Crank-Gupta problem; an avascular tumour growth model. We compare the numerical results to exact solutions where possible, or to results obtained from other methods, and find that our approach is accurate. We apply three different strategies to the tumour growth model, which enables us to make comparisons between the different approaches. We conclude that our moving mesh method can offer equal accuracy and better resolution, whilst offering greater flexibility than a standard fixed mesh approach.

Binet-Legendre metric

Professor Vladimir Matveev, University of Jena, Germany

Friday 15 March 2013

I will explain a simple construction from elementary convex geometry that associates a Riemannian metric g_F (called the Binet-Legendre metric) to a given Finsler metric F on a smooth manifold M (I will explain what this is). The transformation F → g_F is C^0-stable and has good smoothness properties. The Riemannian metric g_F also behaves nicely under conformal or bi-Lipschitz deformations of the Finsler metric F, which makes it a powerful tool in Finsler geometry. This will be illustrated by solving a number of named problems in Finsler geometry and giving short proofs of known results. In particular, we will answer a question of Matsumoto about local conformal mappings between two Minkowski spaces and will describe all possible conformal self-maps and all self-similarities of a Finsler manifold. We will also classify all compact conformally flat Finsler manifolds and solve the conjecture of Deng and Hu on locally symmetric Finsler spaces.

Einstein metrics are geodesically rigid - Riemannian Geometry and Lie Algebra

Professor Vladimir Matveev, University of Jena, Germany

Friday 15 March 2013

Certain astronomical observations allow us to determine only the UNPARAMETERISED geodesics of the space-time metric. This naturally leads to the following mathematical question, explicitly asked by Weyl and Ehlers: how can one determine a metric from its unparameterised geodesics? I will show that in general this problem cannot be solved uniquely (by exhibiting examples, due to Lagrange and Levi-Civita, of two different metrics with the same geodesics). The main mathematical theorem of my talk (I will give a rigorous proof) will be that 4D Einstein metrics of nonconstant curvature are geodesically rigid, in the sense that unparameterised geodesics determine such metrics uniquely. This is a result of joint work with V. Kiosak. I will also show some purely mathematical applications, one of which is a solution of a problem of S. Lie explicitly stated in 1882 (this part of the talk is based on joint work with R. Bryant and G. Manno).

Complex dynamics and statistics of multidimensional Hamiltonian systems

Dr Tassos Bountis, University of Patras, Greece

Friday 8 March 2013

Hamiltonian systems have been studied extensively by mathematicians and physicists for more than a century producing a wealth of theoretical results, which were later thoroughly explored by numerical and laboratory experiments. Especially in the case of few degrees of freedom, one might claim that most of their dynamical and statistical properties are well understood. And yet, in many dimensions, all the way to the thermodynamic limit, there remain many secrets to be revealed and surprising phenomena to be discovered.

In this lecture, I will try to summarize the work I have been doing in the last few years with my team at Patras on complex problems of multidimensional Hamiltonian systems. I will first present the method of GALI indices to analyze the tangent space dynamics of individual orbits and explore some remarkable localization properties of one-dimensional lattices in configuration as well as Fourier space. In the second part of my talk, I will proceed more globally and study probability density functions (pdfs) of chaotic orbits in the spirit of the central limit theorem. In particular, I will demonstrate that weakly chaotic orbits often obey non-extensive statistical mechanics for long times, until they finally enter a regime of strong chaos, where pdfs tend to Gaussians, Boltzmann-Gibbs theory prevails and the system tends to thermodynamic equilibrium. Recent applications to solid state physics and astronomy will be discussed.

Bendigo

Markov chains

Emeritus Professor Terry Mills

Friday 23 November 2012

There are many systems that change from one state to another, such as the weather or the economy or a person's health. Markov chains are mathematical models that are often used to describe such systems. They were introduced to the scientific literature by the Russian mathematician A.A. Markov about 100 years ago.

During the last century, the theory surrounding these models has grown immensely, as have the applications. In this seminar, I will review the definition of a Markov chain and talk a little about the theory associated with these models. However, the main thrust of the talk is to illustrate the wide range of applications of these models. In 2012, Markov chains are studied in Year 12 Mathematical Methods, and this is about the level at which the seminar will be presented. The talk is based on collaborative work with Ka Chan and Christopher Lenard.
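As a toy illustration of the kind of model discussed (the transition probabilities below are invented for the example, not taken from the seminar), a two-state weather chain and its long-run behaviour can be computed in a few lines of Python.

```python
import numpy as np

# States: 0 = sunny, 1 = rainy.  Each row gives the transition probabilities from that state.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Distribution of the weather after 30 days, starting from a sunny day
state = np.array([1.0, 0.0])
for _ in range(30):
    state = state @ P
print(state)            # approaches the stationary distribution

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised to sum to 1
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()
print(pi)               # approximately [0.667, 0.333]
```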

Did the Swing Rioters do better than other male convicts transported to Tasmania?

Dr Rebecca Kippen (Centre for Health and Society, The University of Melbourne) [Co-author: Janet McCalman]

Friday 5 October 2012

This paper reports on a pilot study of a sample of male convicts transported to Van Diemen's Land between 1831 and 1846. The pilot aims to discern whether a select group of 332 political offenders – the 'Swing Rioters' – were different from other convicts before transportation, in their experience under sentence, and after sentence. Convict life courses have been traced by a group of expert volunteers.

We find that the Swing Rioters were clearly different in their social and human capital and this, combined with the wide social support for their cause, mitigated the stigma of convictism and the intensity of interactions between the convict and penal discipline. Even controlling for background characteristics, the Swing Rioters had fewer offences and experienced less punishment than other convicts. They also remained more visible after sentence, were more likely to establish viable family lines, and lived longer than other convicts.

The role of modelling in planning cancer services

Terry Mills (Senior Cancer Data Analyst, Loddon Mallee Integrated Cancer Service)

Friday 25 May 2012

In this seminar I will present examples that illustrate the role played by mathematical and statistical models in planning cancer services in the Loddon Mallee Region. There are suggestions for the university curriculum that arise from this experience. By and large, the presentation will be equation-free.

Stick-slip particle behaviour in granular beds

Ashley Dyson (La Trobe University)

Friday 4 May 2012

Over the 2011/12 summer vacation period Ashley completed a CSIRO vacation scholarship at the Centre of Mathematics, Informatics and Statistics, located in Clayton, Victoria. During this time, Ashley participated in a project to categorise phases of the stick-slip phenomenon common to all stick-slip events in granular media. Ashley will speak on the inter-particle network of Newtonian forces that underpins these systems.
