Metrika [SJR: 0.839] [H-I: 22] Hybrid journal (may contain Open Access articles) ISSN (Print) 0026-1335 - ISSN (Online) 1435-926X Published by Springer-Verlag
- Admissibility in non-regular family under squared-log error loss
- Abstract:
Consider an estimation problem under the squared-log error loss function in a one-parameter non-regular distribution when the endpoint of the support depends on an unknown parameter. The purpose of this paper is to give sufficient conditions for a generalized Bayes estimator of a parametric function to be admissible. Some examples are given.
PubDate: 2015-02-01
- A characterization of the innovations of first order autoregressive models
- Abstract:
Suppose that \(Y_t\) follows a simple AR(1) model, that is, it can be expressed as \(Y_t= \alpha Y_{t-1} + W_t\), where \(W_t\) is a white noise with mean \(\mu \) and variance \(\sigma ^2\). There are many examples in practice where these assumptions hold very well. Consider \(X_t = e^{Y_t}\). We shall show that the autocorrelation function of \(X_t\) characterizes the distribution of \(W_t\).
PubDate: 2015-02-01
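The characterization above is easy to explore numerically. The sketch below is an illustration only, under the assumption of Gaussian white noise (with illustrative values \(\alpha = 0.5\), \(\sigma = 0.8\)): it simulates the AR(1) series, exponentiates it, and compares the sample autocorrelation of \(X_t\) with the closed-form lognormal autocorrelation implied by a normal \(W_t\).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, alpha, mu, sigma, burn=500):
    """Simulate Y_t = alpha * Y_{t-1} + W_t with Gaussian white noise."""
    w = rng.normal(mu, sigma, size=n + burn)
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        y[t] = alpha * y[t - 1] + w[t]
    return y[burn:]                      # drop burn-in so the series is near-stationary

def sample_acf(x, nlags):
    """Sample autocorrelation at lags 0..nlags."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * c0)
                     for k in range(nlags + 1)])

alpha, sigma = 0.5, 0.8
y = simulate_ar1(50_000, alpha, mu=0.0, sigma=sigma)
x = np.exp(y)                            # X_t = e^{Y_t}
acf_x = sample_acf(x, nlags=5)

# For Gaussian W_t, Y_t is Gaussian with variance s2 = sigma^2 / (1 - alpha^2),
# and corr(X_t, X_{t+k}) = (exp(s2 * alpha^k) - 1) / (exp(s2) - 1).
s2 = sigma ** 2 / (1 - alpha ** 2)
theory = (np.exp(s2 * alpha ** np.arange(6)) - 1) / (np.exp(s2) - 1)
print(np.round(acf_x, 3))
print(np.round(theory, 3))
```

Under a non-Gaussian \(W_t\) the sample autocorrelation of \(X_t\) would deviate from this lognormal formula, which is the idea behind the characterization.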
- Optimal bounds on expectations of order statistics and spacings from nonparametric families of distributions generated by convex transform order
- Abstract:
Assume that \(X_1,\ldots , X_n\) are i.i.d. random variables with a common distribution function \(F\) which precedes a fixed distribution function \(W\) in the convex transform order. In particular, if \(W\) is either the uniform or the exponential distribution function, then \(F\) has an increasing density or an increasing failure rate, respectively. We present sharp upper bounds on the expectations of single order statistics and spacings based on \(X_1,\ldots , X_n\), expressed in terms of the population mean and standard deviation, for the family of all parent distributions preceding \(W\) in the convex transform order. We also characterize the distributions which attain the bounds, and specialize the general results to distributions with increasing density function.
PubDate: 2015-02-01
- A robust two-stage procedure in Bayes sequential estimation of a particular exponential family
- Abstract:
The problem of Bayes sequential estimation of the unknown parameter in a particular exponential family of distributions is considered under a linear exponential loss function for estimation error and a fixed cost for each observation. Instead of fully sequential sampling, a two-stage sampling technique is introduced to solve the problem. The proposed two-stage procedure is robust in the sense that it does not depend on the parameters of the conjugate prior. It is shown that the two-stage procedure is asymptotically pointwise optimal and asymptotically optimal for a large class of conjugate priors. A simulation study is conducted to compare the performances of the two-stage procedure and the purely sequential procedure.
PubDate: 2015-02-01
- Minimum distance lack-of-fit tests under long memory errors
- Abstract:
This paper discusses tests of lack-of-fit of a parametric regression model when the errors form a long memory moving average process with long memory parameter \(0<d<1/2\), and the design is non-random and uniform on \([0,1]\). These tests are based on certain minimized distances between a nonparametric regression function estimator and the parametric model being fitted. The paper investigates the asymptotic null distribution of the proposed test statistics and of the corresponding minimum distance estimators under minimal conditions on the model being fitted. The limiting distributions of these statistics are Gaussian for \(0<d<1/4\) and non-Gaussian for \(1/4<d<1/2\). We also discuss the consistency of these tests against a fixed alternative. A simulation study is included to assess the finite sample behavior of the proposed tests.
PubDate: 2015-02-01
- Linearity of regression for overlapping order statistics
- Abstract:
We consider the problem of characterizing continuous distributions for which linearity of regression of overlapping order statistics, \(\mathbb{E}(X_{i:m}\mid X_{j:n})=aX_{j:n}+b\), \(m\le n\), holds. Thanks to a new representation of the conditional expectation \(\mathbb{E}(X_{i:m}\mid X_{j:n})\) in terms of the conditional expectations \(\mathbb{E}(X_{l:n}\mid X_{j:n})\), \(l=i,\ldots ,n-m+i\), we are able to use the already known approach based on the Rao-Shanbhag version of the Cauchy integrated functional equation. However, this is possible only if \(j\le i\) or \(j\ge n-m+i\). In the remaining cases the problem essentially remains open.
PubDate: 2015-02-01
- Optimal crossover designs in a model with self and mixed carryover effects with correlated errors
- Abstract:
We determine optimal crossover designs for the estimation of direct treatment effects in a model with mixed and self carryover effects. The model also assumes that the errors within each experimental unit are correlated following a stationary first-order autoregressive process. The paper considers situations where the number of periods for each experimental unit is at least four and the number of treatments is greater than or equal to the number of periods.
PubDate: 2015-02-01
- A dynamic stress–strength model with stochastically decreasing strength
- Abstract:
We consider a dynamic stress–strength model under external shocks. The strength of the system decreases with time and the failure occurs when the strength finally vanishes. Furthermore, there is another cause of the system failure induced by an external shock process. Each shock is characterized by the corresponding stress. If the magnitude of the stress exceeds the current strength, then the system also fails. We assume that the initial strength of the system and its decreasing drift pattern are random. We derive the survival function of the system and interpret the time-dependent dynamic changes of the random quantities which govern the reliability performance of the system.
PubDate: 2015-01-15
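A Monte Carlo sketch can make the two failure modes (wear-out when the strength vanishes, and shock-induced failure when a stress exceeds the remaining strength) concrete. All distributional choices below (linear strength decay with Gamma-distributed initial strength and drift, Poisson shock arrivals, exponential stresses) are illustrative assumptions, not the specification of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_mc(t, n_sim=20_000, shock_rate=0.5):
    """Monte Carlo estimate of P(system survives past t) for one
    illustrative specification: strength S(u) = A - B*u with random
    A ~ Gamma(5, 1) and B ~ Gamma(2, 1)/10; shocks arrive as a
    homogeneous Poisson process, each carrying an Exp(1) stress."""
    alive = 0
    for _ in range(n_sim):
        a = rng.gamma(5.0)            # random initial strength
        b = rng.gamma(2.0) / 10.0     # random decreasing drift
        if a / b <= t:
            continue                  # wear-out failure: strength vanished by t
        # shock times in (0, t] from a homogeneous Poisson process
        n_shocks = rng.poisson(shock_rate * t)
        times = rng.uniform(0.0, t, size=n_shocks)
        stresses = rng.exponential(1.0, size=n_shocks)
        # shock failure if any stress exceeds the remaining strength
        if np.all(stresses <= a - b * times):
            alive += 1
    return alive / n_sim

print(survival_mc(1.0), survival_mc(5.0), survival_mc(20.0))
```

The estimated survival function decreases in \(t\), with wear-out dominating at later times under these parameter choices.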
- Smoothing spline regression estimation based on real and artificial data
- Abstract:
In this article we introduce a smoothing spline estimate for fixed design regression estimation based on real and artificial data, where the artificial data comes from previously undertaken similar experiments. The smoothing spline estimate gives different weights to the real and the artificial data. It is investigated under which conditions the rate of convergence of this estimate is better than the rate of convergence of the ordinary smoothing spline estimate applied to the real data only. The finite sample size performance of the estimate is analyzed using simulated data. The usefulness of the estimate is illustrated by applying it in the context of experimental fatigue tests.
PubDate: 2015-01-11
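The idea of down-weighting artificial data in a spline fit can be sketched with `scipy.interpolate.UnivariateSpline` and observation weights. The weighting scheme, the bias built into the artificial sample, and the choice of smoothing level below are all illustrative assumptions, not the estimator of the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
f = lambda u: np.sin(2 * np.pi * u)          # true regression function

# a small "real" sample and a larger, slightly biased "artificial" one
x_real = rng.uniform(0, 1, 15)
y_real = f(x_real) + rng.normal(0, 0.2, 15)
x_art = rng.uniform(0, 1, 100)
y_art = f(x_art) + 0.1 + rng.normal(0, 0.2, 100)   # biased by +0.1

x = np.concatenate([x_real, x_art])
y = np.concatenate([y_real, y_art])
order = np.argsort(x)

def fit(weight_art):
    """Smoothing spline with weight 1 on real and weight_art on
    artificial observations; the smoothing level s targets the
    (here known) noise standard deviation 0.2."""
    w = np.concatenate([np.ones(15), np.full(100, weight_art)])
    s = 0.2 ** 2 * np.sum(w ** 2)
    return UnivariateSpline(x[order], y[order], w=w[order], s=s)

grid = np.linspace(0.05, 0.95, 50)
mses = {wa: float(np.mean((fit(wa)(grid) - f(grid)) ** 2))
        for wa in (0.01, 0.3, 1.0)}
for wa, mse in mses.items():
    print(f"artificial-data weight {wa:.2f}: MSE {mse:.4f}")
```

Varying the artificial-data weight trades the extra information in the larger sample against its bias, which is the trade-off the rate-of-convergence analysis in the paper formalizes.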
- Bivariate distributions with conditionals satisfying the proportional generalized odds rate model
- Abstract:
New bivariate models are obtained with conditional distributions (in two different senses) satisfying the proportional generalized odds rate (PGOR) model. The PGOR semi-parametric model includes as particular cases the Cox proportional hazard rate (PHR) model and the proportional odds rate (POR) model. Thus the new bivariate models are very flexible and include, as particular cases, the bivariate extensions of PHR and POR models. Moreover, some well known parametric bivariate models are also included in these general models. The basic theoretical properties of the new models are obtained. An application to fit a real data set is also provided.
PubDate: 2015-01-08
- One-sample Bayesian prediction intervals based on progressively type-II censored data from the half-logistic distribution under progressive stress model
- Abstract:
Based on a progressively Type-II censored sample, we discuss Bayesian interval prediction under progressive stress accelerated life tests. The lifetime of a unit under use-condition stress is assumed to follow the half-logistic distribution with a scale parameter satisfying the inverse power law. Prediction bounds for future order statistics are obtained. A simulation study is performed and numerical computations are carried out, based on two different progressive censoring schemes. The coverage probabilities and average interval lengths of the prediction intervals are computed via a Monte Carlo simulation.
PubDate: 2015-01-07
- Fisher information in censored samples from folded and unfolded populations
- Abstract:
Fisher information (FI) forms the backbone for many parametric inferential procedures and provides a useful metric for the design of experiments. The purpose of this paper is to suggest an easy way to compute the FI in censored samples from an unfolded symmetric distribution and its folded version with minimal computation that involves only the expectations of functions of order statistics from the folded distribution. In particular we obtain expressions for the FI in a single order statistic and in Type-II censored samples from an unfolded distribution and the associated folded distribution. We illustrate our results by computing the FI on the scale parameter in censored samples from a Laplace (double exponential) distribution in terms of the expectations of special functions of order statistics from exponential samples. We discuss the limiting forms and illustrate applications of our results.
PubDate: 2015-01-04
- Exact likelihood inference for the two-parameter exponential distribution under Type-II progressively hybrid censoring
- Abstract:
Hybrid censoring schemes are commonly used in life-testing experiments to reduce the experimental time and the cost. A Type-II progressive hybrid censoring scheme (PHCS) was introduced by Kundu and Joarder (Comput Stat Data Anal 50:2509–2528, 2006) that combines progressive Type-II censoring and Type-I censoring. In this paper, we consider the statistical inference of a two-parameter exponential distribution under the Type-II PHCS. The conditional maximum likelihood estimates (MLEs) of the model parameters and their joint and marginal conditional moment generating functions are derived. Based on these exact conditional moments, bias-reduced estimators are proposed and their distributions are discussed. Confidence intervals of the model parameters based on exact and asymptotic distributions of the MLEs and bias-reduced estimators are developed. The performances of the point and interval estimation procedures are evaluated and compared through exact calculations and Monte Carlo simulations. Recommendations are made based on these results and an illustrative example is presented.
PubDate: 2015-01-03
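For ordinary Type-II right censoring (the special case of the scheme with no intermediate withdrawals and no time limit), the two-parameter exponential MLEs have a simple closed form, sketched below. The conditional MLEs and bias-reduced estimators of the paper for the Type-II PHCS are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(8)

def two_param_exp_mle(x_cens, n):
    """MLEs of location mu and scale theta for a two-parameter
    exponential sample of size n, observed only through its r
    smallest order statistics (ordinary Type-II right censoring):
        mu_hat    = X_(1)
        theta_hat = [sum_i (X_(i) - X_(1)) + (n-r)(X_(r) - X_(1))] / r
    """
    x = np.sort(x_cens)
    r = len(x)
    mu_hat = x[0]
    theta_hat = (np.sum(x - x[0]) + (n - r) * (x[-1] - x[0])) / r
    return mu_hat, theta_hat

mu, theta, n, r = 2.0, 3.0, 40, 25
sample = np.sort(mu + rng.exponential(theta, n))[:r]   # keep r smallest
mu_hat, theta_hat = two_param_exp_mle(sample, n)
print(f"mu_hat = {mu_hat:.3f}, theta_hat = {theta_hat:.3f}")
```

The term \((n-r)(X_{(r)} - X_{(1)})\) accounts for the \(n-r\) censored units, each known only to have survived past \(X_{(r)}\).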
- Robust spline-based variable selection in varying coefficient model
- Abstract:
The varying coefficient model is widely used as an extension of the linear regression model. Many procedures have been developed for the model estimation, and recently efficient variable selection procedures for the varying coefficient model have been proposed as well. However, those variable selection approaches are mainly built on the least-squares (LS) type method. Although the LS method is a successful and standard choice in the varying coefficient model fitting and variable selection, it may suffer when the errors follow a heavy-tailed distribution or in the presence of outliers. To overcome this issue, we start by developing a novel robust estimator, termed rank-based spline estimator, which combines the ideas of rank inference and polynomial spline. Furthermore, we propose a robust variable selection method, incorporating the smoothly clipped absolute deviation penalty into the rank-based spline loss function. Under mild conditions, we theoretically show that the proposed rank-based spline estimator is highly efficient across a wide spectrum of distributions. Its asymptotic relative efficiency with respect to the LS-based method is closely related to that of the signed-rank Wilcoxon test with respect to the t test. Moreover, the proposed variable selection method can identify the true model consistently, and the resulting estimator can be as efficient as the oracle estimator. Simulation studies show that our procedure has better performance than the LS-based method when the errors deviate from normality.
PubDate: 2015-01-01
- Data transformations and goodness-of-fit tests for type-II right censored samples
- Abstract:
We suggest several goodness-of-fit (GOF) methods appropriate for Type-II right censored data. Our strategy is to transform the original observations from a censored sample into an approximately i.i.d. sample of normal variates and then perform a standard GOF test for normality on the transformed observations. A simulation study with several well-known parametric distributions under testing reveals the sampling properties of the methods. We also provide a theoretical analysis of the proposed method.
PubDate: 2015-01-01
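The transformation strategy can be sketched for testing exponentiality: under the null, the normalized spacings of a Type-II censored exponential sample are i.i.d. exponential, so pushing them through the fitted distribution function and the normal quantile function gives approximately i.i.d. normal variates to which a standard normality test applies. This is one plausible instance of the strategy, not necessarily the transformation used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def censored_exp_to_normal(x_cens, n):
    """Map the r smallest order statistics of an Exp(theta) sample of
    size n to approximately i.i.d. N(0,1) variates under H0. Under
    exponentiality the normalized spacings
        D_i = (n - i + 1) * (X_(i) - X_(i-1))
    are i.i.d. Exp(theta)."""
    r = len(x_cens)
    x = np.concatenate([[0.0], np.sort(x_cens)])
    d = (n - np.arange(r)) * np.diff(x)       # normalized spacings
    theta_hat = d.mean()                      # MLE of the scale
    u = 1.0 - np.exp(-d / theta_hat)          # approximately Uniform(0,1)
    u = np.clip(u, 1e-10, 1 - 1e-10)
    return stats.norm.ppf(u)                  # approximately N(0,1)

n, r = 50, 30
sample = np.sort(rng.exponential(2.0, n))[:r]   # Type-II right censoring
z = censored_exp_to_normal(sample, n)
stat, pval = stats.shapiro(z)                   # standard normality GOF test
print(f"Shapiro-Wilk p-value under H0: {pval:.3f}")
```

Under a non-exponential parent, the transformed values deviate from normality and the test gains power.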
- A Darling–Erdős-type CUSUM-procedure for functional data
- Abstract:
The focus of the paper is nonparametric detection of changes in the mean of \(m\)-dependent stationary functional data via a cumulative sum (CUSUM) procedure. We consider a projection-based quasi-maximum likelihood CUSUM procedure which relies on a Darling–Erdős-type limit theorem. Under mild moment assumptions we investigate the asymptotic properties under the null hypothesis and show consistency under the alternatives of either an abrupt or a gradual change in the mean. The finite sample behavior is illustrated in a small simulation study including an application to temperature data from Hohenpeißenberg (Bavaria, Germany).
PubDate: 2015-01-01
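For a univariate score series (for example, the projection of the functional observations onto a principal direction), the basic CUSUM idea can be sketched as follows. This is a simplified illustration, not the quasi-maximum likelihood procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def cusum(x):
    """Normalized max-CUSUM statistic max_k |S_k - (k/n) S_n| / (sd sqrt(n)),
    together with the argmax, which estimates the change location."""
    n = len(x)
    k = np.arange(1, n + 1)
    dev = np.abs(np.cumsum(x) - k / n * x.sum())
    stat = dev.max() / (x.std(ddof=1) * np.sqrt(n))
    return stat, int(dev.argmax()) + 1

x0 = rng.normal(0, 1, 400)                         # no change in the mean
x1 = np.concatenate([rng.normal(0, 1, 200),
                     rng.normal(1.0, 1, 200)])     # abrupt change at 200
stat0, _ = cusum(x0)
stat1, khat = cusum(x1)
print(f"no change: {stat0:.2f}  change: {stat1:.2f}  estimated change point: {khat}")
```

Under the null the statistic behaves like the supremum of a Brownian bridge; a Darling–Erdős-type theorem refines this by an extreme-value normalization, giving critical values that adapt to the sample size.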
- On shrinkage estimators in matrix variate elliptical models
- Abstract:
This paper derives the risk functions of a class of shrinkage estimators for the mean parameter matrix of a matrix variate elliptically contoured distribution. It is shown that the positive-rule shrinkage estimator outperforms the shrinkage and unrestricted (maximum likelihood) estimators. To illustrate the findings of the paper, the relative risk functions for different degrees of freedom are given for a multivariate t distribution. Shrinkage estimators for the matrix variate regression model under matrix normal, matrix t or Pearson VII error distributions are special cases of the results of this paper.
PubDate: 2015-01-01
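The flavor of the positive-rule result can be illustrated in the classical vector case with the positive-part James-Stein estimator, which dominates the maximum likelihood estimator for dimension \(p \ge 3\). This is a simplified stand-in for the matrix variate elliptical setting of the paper, with an arbitrary mean vector chosen for the simulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def js_positive_part(x, sigma2=1.0):
    """Positive-part James-Stein estimate of a p-vector mean from one
    observation x ~ N(theta, sigma2 * I), p >= 3."""
    p = len(x)
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
    return max(shrink, 0.0) * x        # positive rule: never shrink past zero

p, n_rep = 10, 5_000
theta = np.full(p, 0.5)                # illustrative true mean
risk_mle = risk_js = 0.0
for _ in range(n_rep):
    x = theta + rng.normal(0, 1, p)
    risk_mle += np.sum((x - theta) ** 2)
    risk_js += np.sum((js_positive_part(x) - theta) ** 2)
print("MLE risk:", risk_mle / n_rep, "  JS+ risk:", risk_js / n_rep)
```

The simulated risk of the positive-part estimator falls well below the MLE risk of \(p\), mirroring the dominance result established for the matrix variate case.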
- Construction of nearly orthogonal Latin hypercube designs
- Abstract:
The Latin hypercube design (LHD) is a popular choice of experimental design when computer simulation is used to study a physical process. In this paper, we propose some methods for constructing nearly orthogonal Latin hypercube designs (NOLHDs) with 2, 4, 8, 12, 16, 20 and 24 factors having flexible run sizes. These designs can be very useful when orthogonal Latin hypercube designs (OLHDs) of the needed sizes do not exist.
PubDate: 2015-01-01
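A random Latin hypercube design, together with a crude column-correlation criterion for "near orthogonality", can be sketched as follows. The naive best-of-many search shown here is only an illustration, not one of the construction methods proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def latin_hypercube(n, k):
    """Random n-run, k-factor Latin hypercube design on [0,1)^k:
    each column places one point in each of the n equal strata,
    jittered uniformly within its stratum."""
    perms = np.column_stack([rng.permutation(n) for _ in range(k)])
    return (perms + rng.uniform(0, 1, size=(n, k))) / n

def max_col_corr(d):
    """Largest absolute off-diagonal correlation between columns --
    a crude orthogonality criterion."""
    c = np.corrcoef(d, rowvar=False)
    return np.abs(c - np.eye(d.shape[1])).max()

# keep the least-correlated of many random designs (a naive search)
best = min((latin_hypercube(20, 4) for _ in range(200)), key=max_col_corr)
print("max |column correlation|:", round(max_col_corr(best), 4))
```

Orthogonal and nearly orthogonal constructions drive this criterion to (or near) zero by design rather than by search, which is what makes them valuable when the run size permits them.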
- Improving the EBLUPs of balanced mixed-effects models
- Abstract:
Mixed models are now heavily employed in analyses of promotional tactics as well as in clinical research. The Best Linear Unbiased Predictor (BLUP) in mixed models is a function of the variance components, which are typically estimated using conventional MLE-based methods. It is well known that such approaches frequently yield estimates of factor variances that are either zero or negative. In such situations, ML and REML either do not provide any EBLUPs, or the EBLUPs all become practically equal, a highly undesirable repercussion. In this article we propose a class of estimators that do not suffer from the negative variance problem, and we do so while improving upon existing estimators. The MSE superiority of the resulting EBLUPs is illustrated by a simulation study. In our derivation, we also introduce a Lemma, which can be considered as the converse of Stein's Lemma.
PubDate: 2014-12-25
- Testing structural changes in panel data with small fixed panel size and bootstrap
- Abstract:
The panel data of interest consist of a moderate or relatively large number of panels, while each panel contains a small number of observations. This paper establishes testing procedures to detect a possible common change in the means of the panels. To this end, we consider a ratio-type test statistic and derive its asymptotic distribution under the no-change null hypothesis. Moreover, we prove the consistency of the test under the alternative. The main advantage of this approach is that the variance of the observations neither has to be known nor estimated. On the other hand, the correlation structure needs to be estimated. To overcome this issue, a bootstrap technique is proposed as a completely data-driven approach without any tuning parameters. The validity of the bootstrap algorithm is shown. As a by-product of the developed tests, we introduce a common break point estimate and prove its consistency. The results are illustrated through a simulation study. An application of the procedure to actuarial data is presented.
PubDate: 2014-12-21
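A simplified sketch of testing for a common change in panel means: the statistic below aggregates within-panel CUSUMs, and a within-panel permutation stands in for the paper's bootstrap. Both the statistic and the resampling scheme are illustrative assumptions, not the ratio statistic or the bootstrap algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def panel_cusum(data):
    """Sum over panels of squared within-panel CUSUMs: large values
    indicate a common change in means (an illustrative statistic)."""
    n_panels, t = data.shape
    centered = data - data.mean(axis=1, keepdims=True)
    s = np.cumsum(centered, axis=1)[:, :-1]    # partial sums, k = 1..T-1
    return float((s ** 2).sum() / (n_panels * t ** 2))

def permutation_pvalue(data, n_perm=300):
    """Within-panel permutation: shuffling each short panel destroys a
    common change in means while preserving the marginals."""
    stat = panel_cusum(data)
    perm = np.array([panel_cusum(rng.permuted(data, axis=1))
                     for _ in range(n_perm)])
    return float((perm >= stat).mean())

# many panels, few observations each; change in the mean after time 5
n_panels, t, tau = 200, 10, 5
null_data = rng.normal(0, 1, (n_panels, t))
alt_data = null_data + np.concatenate([np.zeros(tau), np.ones(t - tau)])
p_null = permutation_pvalue(null_data)
p_alt = permutation_pvalue(alt_data)
print("p-value without change:", p_null)
print("p-value with change:   ", p_alt)
```

Aggregating many short panels is what gives the test its power even though no single panel contains enough observations to detect the change on its own.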