Metrika [SJR: 0.943] [H-I: 25] Hybrid journal (may contain Open Access articles). ISSN (Print) 0026-1335; ISSN (Online) 1435-926X. Published by Springer-Verlag.
- A dynamic stress–strength model with stochastically decreasing strength
- Abstract:
We consider a dynamic stress–strength model under external shocks. The strength of the system decreases with time and the failure occurs when the strength finally vanishes. Furthermore, there is another cause of the system failure induced by an external shock process. Each shock is characterized by the corresponding stress. If the magnitude of the stress exceeds the current strength, then the system also fails. We assume that the initial strength of the system and its decreasing drift pattern are random. We derive the survival function of the system and interpret the time-dependent dynamic changes of the random quantities which govern the reliability performance of the system.
PubDate: 2015-10-01
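As an illustration of the kind of model described above, a Monte Carlo sketch is given below; all distributional choices (exponential initial strength, linear decreasing drift, unit-rate Poisson shocks with exponential stresses) are assumptions made for the sketch, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_failure_time(shock_rate=1.0, horizon=50.0):
    """One draw of the system failure time under assumed distributions."""
    s0 = rng.exponential(10.0)     # random initial strength (assumption)
    drift = rng.uniform(0.5, 1.5)  # random linear decay rate (assumption)
    t_exhaust = s0 / drift         # time at which the strength vanishes
    t = 0.0
    while True:
        t += rng.exponential(1.0 / shock_rate)  # next shock arrival
        if t >= min(t_exhaust, horizon):
            return min(t_exhaust, horizon)      # strength exhausted first
        stress = rng.exponential(3.0)           # shock magnitude (assumption)
        if stress >= s0 - drift * t:            # stress exceeds current strength
            return t

times = np.array([simulate_failure_time() for _ in range(10_000)])
survival_at_5 = (times > 5.0).mean()  # Monte Carlo estimate of S(5)
```

The empirical survival function computed this way can be compared against a closed-form survival function, once specific distributions are fixed.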
- Inference for the bivariate Birnbaum–Saunders lifetime regression model and associated inference
- Abstract:
In this paper, we discuss a regression model based on the bivariate Birnbaum–Saunders distribution. We derive the maximum likelihood estimates of the model parameters and then develop associated inference. Next, we briefly describe likelihood-ratio tests for some hypotheses of interest as well as some interval estimation methods. Monte Carlo simulations are then carried out to examine the performance of the estimators as well as the interval estimation methods. Finally, a numerical data analysis is performed for illustrating all the inferential methods developed here.
PubDate: 2015-10-01
- A Poisson INAR(1) model with serially dependent innovations
- Abstract:
Motivated by a certain type of infinite-patch metapopulation model, we propose an extension to the popular Poisson INAR(1) model, where the innovations are assumed to be serially dependent in such a way that their mean is increased if the current population is large. We shall recognize that this new model forms a bridge between the Poisson INAR(1) model and the INARCH(1) model. We analyze the stochastic properties of the observations and innovations from an extended Poisson INAR(1) process, and we consider the problem of model identification and parameter estimation. A real-data example about iceberg counts shows how to benefit from the new model.
PubDate: 2015-10-01
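A minimal simulation sketch of such a process, assuming (purely for illustration) that the innovation mean takes the linear form beta + gamma * X_{t-1}; gamma = 0 recovers the ordinary Poisson INAR(1), while alpha = 0 gives an INARCH(1)-type recursion:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_extended_inar1(n, alpha=0.5, beta=1.0, gamma=0.2):
    """X_t = alpha o X_{t-1} + eps_t with eps_t ~ Poisson(beta + gamma * X_{t-1}),
    where 'o' is binomial thinning. Requires alpha + gamma < 1 for a stationary mean."""
    x = np.zeros(n, dtype=np.int64)
    x[0] = rng.poisson(beta / (1.0 - alpha - gamma))  # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)          # binomial thinning
        innovation = rng.poisson(beta + gamma * x[t - 1])  # level-dependent innovation
        x[t] = survivors + innovation
    return x

series = simulate_extended_inar1(5000)
# the sample mean should settle near beta / (1 - alpha - gamma)
```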
- Fisher information in censored samples from folded and unfolded populations
- Abstract:
Fisher information (FI) forms the backbone for many parametric inferential procedures and provides a useful metric for the design of experiments. The purpose of this paper is to suggest an easy way to compute the FI in censored samples from an unfolded symmetric distribution and its folded version with minimal computation that involves only the expectations of functions of order statistics from the folded distribution. In particular we obtain expressions for the FI in a single order statistic and in Type-II censored samples from an unfolded distribution and the associated folded distribution. We illustrate our results by computing the FI on the scale parameter in censored samples from a Laplace (double exponential) distribution in terms of the expectations of special functions of order statistics from exponential samples. We discuss the limiting forms and illustrate applications of our results.
PubDate: 2015-10-01
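A quick numerical check of the uncensored baseline: for a Laplace (double exponential) sample the folded variable |X| is exponential, so the score for the scale parameter depends on the data only through |x|, and the per-observation FI equals 1/sigma^2. This sketch verifies that identity by Monte Carlo; the censored-sample expressions of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def laplace_score_scale(x, sigma):
    # d/d(sigma) of log f(x; sigma) for Laplace(0, sigma): (|x|/sigma - 1) / sigma
    return (np.abs(x) / sigma - 1.0) / sigma

sigma = 2.0
x = rng.laplace(0.0, sigma, size=200_000)
fi_estimate = np.mean(laplace_score_scale(x, sigma) ** 2)
# theoretical per-observation FI on the scale parameter: 1 / sigma**2 = 0.25
```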
- Tests in variance components models under skew-normal settings
- Abstract:
The hypothesis testing problems of unknown parameters for the variance components model with skew-normal random errors are discussed. Several properties of the model, such as the density function, moment generating function, and independence conditions, are obtained. A new version of Cochran’s theorem is given, which is used to establish exact tests for fixed effects and variance components of the model. For illustration, our main results are applied to two examples and a real data problem. Finally, simulation results on the type I error probability and power of the proposed test are reported; they indicate that the test performs satisfactorily on both criteria.
PubDate: 2015-10-01
- One-sample Bayesian prediction intervals based on progressively type-II censored data from the half-logistic distribution under progressive stress model
- Abstract:
Based on a progressively type-II censored sample, we discuss Bayesian interval prediction under progressive stress accelerated life tests. The lifetime of a unit under use condition stress is assumed to follow the half-logistic distribution with a scale parameter satisfying the inverse power law. Prediction bounds of future order statistics are obtained. A simulation study is performed and numerical computations are carried out, based on two different progressive censoring schemes. The coverage probabilities and average interval lengths of the confidence intervals are computed via a Monte Carlo simulation.
PubDate: 2015-10-01
- Generalized variable resolution designs
- Abstract:
In this paper, the concept of generalized variable resolution is proposed for designs with nonnegligible interactions between groups. The conditions for the existence of generalized variable resolution designs are discussed. Connections between different generalized variable resolution designs and compromise plans, clear compromise plans and designs containing partially clear two-factor interactions are explored. A general construction method for the proposed designs is also discussed.
PubDate: 2015-10-01
- Erratum to: Testing structural changes in panel data with small fixed panel size and bootstrap
- PubDate: 2015-09-23
- Fourier-type estimation of the power GARCH model with stable-Paretian innovations
- Abstract:
We consider estimation for general power GARCH models under stable-Paretian innovations. Exploiting the simple structure of the conditional characteristic function of the observations driven by these models we propose minimum distance estimation based on the empirical characteristic function of corresponding residuals. Consistency of the estimators is proved, and the asymptotic distribution of the estimator is studied. Efficiency issues are explored and finite-sample results are presented as well as applications of the proposed procedures to real data from the financial markets. A multivariate extension is also considered.
PubDate: 2015-09-19
- Minimum Hellinger distance estimation for bivariate samples and time series with applications to nonlinear regression and copula-based models
- Abstract:
We study minimum Hellinger distance estimation (MHDE) based on kernel density estimators for bivariate time series, covering various commonly used regression models and parametric time series such as nonlinear regressions with conditionally heteroscedastic errors and copula-based Markov processes, where copula densities are used to model the conditional densities. It is shown that consistency and asymptotic normality of the MHDE follow essentially from the uniform consistency of the density estimate and the validity of the central limit theorem for its integrated version. We also provide explicit sufficient conditions, both for the i.i.d. case and for strongly mixing series. In addition, for the case of i.i.d. data, we briefly discuss the asymptotics under local alternatives and relate the results to maximum likelihood estimation.
PubDate: 2015-09-19
- A von Mises approximation to the small sample distribution of the trimmed mean
- Abstract:
The small-sample distribution of the trimmed mean is usually approximated by a Student’s t distribution. But this approximation is valid only when the observations come from a standard normal model and the sample size is not very small. Moreover, until now there has been only empirical justification for this approximation, and no formal proof. Although there are some accurate saddlepoint approximations when the sample size is small and the distribution is not normal, these are very difficult to apply, and the elements involved in them are difficult to interpret. In this paper we propose a new approximation based on the von Mises expansion for the tail probability functional of the trimmed mean, which improves the usual Student’s t approximation in the normal case and which can be applied to other models. This new approximation allows, for instance, an objective choice of the trimming fraction in a hypothesis testing context, using the new tool of the p value line.
PubDate: 2015-08-29
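For concreteness, the statistic in question is simply the mean after symmetric trimming; a minimal sketch (the trimming convention, floor(frac * n) points per tail, is one common choice):

```python
import numpy as np

rng = np.random.default_rng(3)

def trimmed_mean(x, frac=0.1):
    """Symmetric trimmed mean: drop floor(frac * n) observations from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(frac * len(x)))
    return x[g:len(x) - g].mean()

# a small standard-normal sample: the setting in which the Student's t
# approximation to the trimmed mean's distribution is usually invoked
sample = rng.normal(0.0, 1.0, size=15)
tm = trimmed_mean(sample, frac=0.1)
```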
- Generalized waiting time distributions associated with runs
- Abstract:
Let \(\left\{ X_{t},t\ge 1\right\} \) be a sequence of random variables with two possible values, either “1” (success) or “0” (failure). Define an independent sequence of random variables \(\left\{ D_{i},i\ge 1\right\} \). The random variable \(D_{i}\) is associated with the success occupying the ith place in a run of successes. We define the weight of a success run as the sum of the D values corresponding to the successes in the run. Define the following two random variables: \(N_{k}\) is the number of trials until the weight of a single success run equals or exceeds k, and \(N_{r,k}\) is the number of trials until the weight of each of r success runs equals or exceeds k in \(\left\{ X_{t},t\ge 1\right\} \). Distributional properties of the waiting time random variables \(N_{k}\) and \(N_{r,k}\) are studied and illustrative examples are presented.
PubDate: 2015-08-07
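The waiting time \(N_{k}\) lends itself to direct simulation; the Bernoulli(p) trials and exponential D values below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(4)

def waiting_time_nk(p=0.5, k=5.0):
    """Number of trials until the weight (sum of D values) of a single
    success run equals or exceeds k. D values are Exp(1) here (assumption)."""
    trials, run_weight = 0, 0.0
    while True:
        trials += 1
        if rng.random() < p:                   # success: accumulate its D value
            run_weight += rng.exponential(1.0)
            if run_weight >= k:
                return trials
        else:                                  # failure: the run (and its weight) resets
            run_weight = 0.0

waits = [waiting_time_nk() for _ in range(2000)]
# the empirical distribution of `waits` approximates the law of N_k
```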
- On cumulative residual (past) inaccuracy for truncated random variables
- Abstract:
To overcome the drawbacks of Shannon’s entropy, the concepts of cumulative residual and past entropy have been proposed in the information-theoretic literature. Furthermore, the Shannon entropy has been generalized in a number of different ways by many researchers. One important extension is the Kerridge inaccuracy measure. In the present communication we study the cumulative residual and past inaccuracy measures, which are extensions of the corresponding cumulative entropies. Several properties, including monotonicity and bounds, are obtained for left, right and doubly truncated random variables.
PubDate: 2015-08-02
- Improving the EBLUPs of balanced mixed-effects models
- Abstract:
Lately, mixed models have been heavily employed in analyses of promotional tactics as well as in clinical research. The Best Linear Unbiased Predictor (BLUP) in mixed models is a function of the variance components, which are typically estimated using conventional MLE-based methods. It is well known that such approaches frequently yield estimates of factor variances that are either zero or negative. In such situations, ML and REML either do not provide any EBLUPs, or the EBLUPs all become practically equal, a highly undesirable repercussion. In this article we propose a class of estimators that do not suffer from the negative-variance problem, while improving upon existing estimators. The MSE superiority of the resulting EBLUPs is illustrated by a simulation study. In our derivation we also introduce a lemma, which can be considered the converse of Stein’s lemma.
PubDate: 2015-08-01
- Smoothing spline regression estimation based on real and artificial data
- Abstract:
In this article we introduce a smoothing spline estimate for fixed design regression estimation based on real and artificial data, where the artificial data comes from previously undertaken similar experiments. The smoothing spline estimate gives different weights to the real and the artificial data. It is investigated under which conditions the rate of convergence of this estimate is better than the rate of convergence of the ordinary smoothing spline estimate applied to the real data only. The finite sample size performance of the estimate is analyzed using simulated data. The usefulness of the estimate is illustrated by applying it in the context of experimental fatigue tests.
PubDate: 2015-08-01
- Erratum to: Improving the EBLUPs of balanced mixed-effects models
- PubDate: 2015-08-01
- Universal surrogate likelihood functions for nonnegative continuous data
- Abstract:
In independent and identically distributed situations, we show that one can properly correct the Poisson and the negative binomial likelihood functions to become asymptotically identical to the profile likelihood function for the mean parameter of nonnegative continuous distributions under mild conditions. We present theoretical justifications and use data analyses to demonstrate the merit of our new robust likelihood method.
PubDate: 2015-08-01
- Bivariate distributions with conditionals satisfying the proportional generalized odds rate model
- Abstract:
New bivariate models are obtained with conditional distributions (in two different senses) satisfying the proportional generalized odds rate (PGOR) model. The PGOR semi-parametric model includes as particular cases the Cox proportional hazard rate (PHR) model and the proportional odds rate (POR) model. Thus the new bivariate models are very flexible and include, as particular cases, the bivariate extensions of the PHR and POR models. Moreover, some well-known parametric bivariate models are also included in these general models. The basic theoretical properties of the new models are obtained. An application fitting a real data set is also provided.
PubDate: 2015-08-01
- Testing structural changes in panel data with small fixed panel size and bootstrap
- Abstract:
The panel data of our interest consist of a moderate or relatively large number of panels, while each panel contains a small number of observations. This paper establishes testing procedures to detect a possible common change in the means of the panels. To this end, we consider a ratio-type test statistic and derive its asymptotic distribution under the no-change null hypothesis. Moreover, we prove the consistency of the test under the alternative. The main advantage of this approach is that the variance of the observations neither has to be known nor estimated. On the other hand, the correlation structure is required to be calculated. To overcome this issue, a bootstrap technique is proposed as a completely data-driven approach without any tuning parameters. The validity of the bootstrap algorithm is shown. As a by-product of the developed tests, we introduce a common break point estimate and prove its consistency. The results are illustrated through a simulation study. An application of the procedure to actuarial data is presented.
PubDate: 2015-08-01
- Exact likelihood inference for the two-parameter exponential distribution under Type-II progressively hybrid censoring
- Abstract:
Hybrid censoring schemes are commonly used in life-testing experiments to reduce the experimental time and the cost. A Type-II progressive hybrid censoring scheme (PHCS) was introduced by Kundu and Joarder (Comput Stat Data Anal 50:2509–2528, 2006) that combines progressive Type-II censoring and Type-I censoring. In this paper, we consider the statistical inference of a two-parameter exponential distribution under the Type-II PHCS. The conditional maximum likelihood estimates (MLEs) of the model parameters and their joint and marginal conditional moment generating functions are derived. Based on these exact conditional moments, bias-reduced estimators are proposed and their distributions are discussed. Confidence intervals of the model parameters based on exact and asymptotic distributions of the MLEs and bias-reduced estimators are developed. The performances of the point and interval estimation procedures are evaluated and compared through exact calculations and Monte Carlo simulations. Recommendations are made based on these results and an illustrative example is presented.
PubDate: 2015-08-01
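As a simplified baseline (ordinary Type-II censoring, not the progressive hybrid scheme of the paper), the closed-form MLEs of the location and scale of a two-parameter exponential from the r smallest of n lifetimes can be sketched as follows; the formula is the standard one, and the simulation settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def two_param_exp_mle_type2(x_sorted, n):
    """MLEs (mu_hat, theta_hat) from an ordinary Type-II censored sample:
    x_sorted holds the r smallest order statistics out of n lifetimes."""
    r = len(x_sorted)
    mu_hat = x_sorted[0]  # smallest observed lifetime estimates the location
    # total time on test beyond mu_hat: observed lifetimes plus (n - r)
    # censored units, each known to survive past the r-th order statistic
    total_time = x_sorted.sum() + (n - r) * x_sorted[-1] - n * mu_hat
    return mu_hat, total_time / r

n, r, mu, theta = 50, 30, 1.0, 2.0
lifetimes = np.sort(mu + rng.exponential(theta, size=n))[:r]
mu_hat, theta_hat = two_param_exp_mle_type2(lifetimes, n)
```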