Metrika — Hybrid journal (may contain Open Access articles). ISSN (Print) 0026-1335 · ISSN (Online) 1435-926X. Published by Springer-Verlag. [SJR: 0.839] [H-I: 22]
- Universal surrogate likelihood functions for nonnegative continuous data
- Abstract: In independent and identically distributed situations, we show that one can properly correct the Poisson and the negative binomial likelihood functions to become asymptotically identical to the profile likelihood function for the mean parameter of nonnegative continuous distributions under mild conditions. We present theoretical justifications and use data analyses to demonstrate the merit of our new robust likelihood method.
PubDate: 2014-12-18
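The core observation — that the Poisson likelihood, although built for counts, still identifies the mean of a nonnegative continuous distribution — can be checked numerically. A minimal sketch (the gamma data and grid search are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=10_000)   # nonnegative continuous data, mean 3

def poisson_surrogate_loglik(m, x):
    """Poisson log-likelihood in the mean m, dropping terms free of m."""
    return np.sum(x * np.log(m) - m)

# The score sum(x / m - 1) = 0 is solved by m = mean(x), so the surrogate MLE of
# the mean equals the sample mean even though x is not count data.
grid = np.linspace(0.5, 6.0, 2001)
m_hat = grid[np.argmax([poisson_surrogate_loglik(m, x) for m in grid])]
print(m_hat, x.mean())
```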
- Robust tests for the equality of two normal means based on the density power divergence
- Abstract: Statistical techniques are used in all branches of science to determine the feasibility of quantitative hypotheses. One of the most basic applications of statistical techniques in comparative analysis is the test of equality of two population means, generally performed under the assumption of normality. In medical studies, for example, we often need to compare the effects of two different drugs, treatments or preconditions on the resulting outcome. The most commonly used test in this connection is the two sample \(t\) test for the equality of means, performed under the assumption of equality of variances. It is a very useful tool, which is widely used by practitioners of all disciplines and has many optimality properties under the model. However, the test has one major drawback; it is highly sensitive to deviations from the ideal conditions, and may perform miserably under model misspecification and the presence of outliers. In this paper we present a robust test for the two sample hypothesis based on the density power divergence measure (Basu et al. in Biometrika 85(3):549–559, 1998), and show that it can be a great alternative to the ordinary two sample \(t\) test. The asymptotic properties of the proposed tests are rigorously established in the paper, and their performances are explored through simulations and real data analysis.
PubDate: 2014-12-02
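The sensitivity the abstract describes is easy to reproduce. A minimal sketch of how a single gross outlier can mask an otherwise clear mean difference in the two sample \(t\) test (this illustrates the problem only; it is not the authors' density power divergence test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=1.0, scale=1.0, size=50)      # true mean difference of 1

t_clean, p_clean = stats.ttest_ind(a, b, equal_var=True)

a_out = a.copy()
a_out[0] = 40.0                                  # a single gross outlier

t_out, p_out = stats.ttest_ind(a_out, b, equal_var=True)
print(p_clean, p_out)  # the outlier inflates the variance and masks the difference
```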
- On estimating the tail index and the spectral measure of multivariate \(\alpha\)-stable distributions
- Abstract: We propose estimators for the tail index and the spectral measure of multivariate \(\alpha\)-stable distributions and derive their asymptotic properties. Simulation studies reveal the appropriateness of the estimators. Applications to financial data are also considered.
PubDate: 2014-11-21
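For intuition on tail index estimation with heavy-tailed data, the classical Hill estimator is a standard benchmark; the sketch below applies it to exact Pareto data (the Hill estimator is illustrative only and is not the estimator proposed in the paper):

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of the tail index alpha from the k largest points."""
    xs = np.sort(x)[::-1]                        # order statistics, largest first
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

rng = np.random.default_rng(2)
alpha = 1.5                                      # true tail index
x = rng.pareto(alpha, size=100_000) + 1.0        # exact Pareto: P(X > t) = t^(-alpha)
alpha_hat = hill_estimator(x, k=2_000)
print(alpha_hat)  # close to 1.5
```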
- Bahadur representations for bootstrap quantiles
- Abstract: A bootstrap sample may contain more than one replica of the original data points. Extending the classical Bahadur-type representations for sample quantiles in the independent and identically distributed case to bootstrap sample quantiles is therefore not a trivial task. This manuscript fulfils the task and establishes the asymptotic theory of bootstrap sample quantiles.
PubDate: 2014-11-06
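The starting point — a bootstrap sample draws with replacement and therefore repeats original data points — can be sketched as a plain Monte Carlo bootstrap of the sample median (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=500)                 # original sample

# Each bootstrap sample draws n points with replacement, so it typically repeats
# original data points -- the complication a Bahadur representation must handle.
boot_medians = np.array([
    np.median(rng.choice(x, size=x.size, replace=True))
    for _ in range(2_000)
])
print(np.median(x), boot_medians.std())  # bootstrap SE of the sample median
```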
- A necessary and sufficient condition for justifying non-parametric likelihood with censored data
- Abstract: The non-parametric likelihood \(L(F)\) for censored data, including univariate or multivariate right-censored, doubly-censored, interval-censored, or masked competing risks data, was proposed by Peto (Appl Stat 22:86–91, 1973). It does not involve censoring distributions. In the literature, several noninformative conditions have been proposed to justify \(L(F)\) so that the GMLE can be consistent (see, for example, Self and Grossman in Biometrics 42:521–530, 1986, or Oller et al. in Can J Stat 32:315–326, 2004). We present the necessary and sufficient (N&S) condition under which \(L(F)\) is equivalent to the full likelihood under the non-parametric set-up. The statement is false under the parametric set-up. Our condition is slightly different from the noninformative conditions in the literature. We present two applications to our cancer research data that satisfy the N&S condition but have dependent censoring.
PubDate: 2014-11-01
- U-type and column-orthogonal designs for computer experiments
- Abstract: U-type designs and orthogonal Latin hypercube designs (OLHDs) have been used extensively for performing computer experiments. Both have good space-filling properties in one dimension. U-type designs may not have low correlations among the main effects, quadratic effects and two-factor interactions. On the other hand, OLHDs are hard to find due to the large number of levels required for each factor. Recently, alternative classes of U-type designs with zero or low correlations among the effects of interest have appeared in the literature. In this paper, we present new classes of U-type or quantitative \(3\)-orthogonal designs for computer experiments. The proposed designs are constructed by combining known combinatorial structures, and their main effects are pairwise orthogonal, orthogonal to the mean effect, and orthogonal to both quadratic effects and two-factor interactions.
PubDate: 2014-11-01
- Asymptotic behavior of the hazard rate in systems based on sequential order statistics
- Abstract: The limiting behavior of the hazard rate of coherent systems based on sequential order statistics is examined. Related results for the survival function of the system lifetime are also considered. For deriving the results, properties of limits involving a relevation transform are studied in detail. Then, limits of characteristics in sequential \(k\)-out-of-\(n\) systems and general coherent systems with failure-dependent components are obtained. Applications to the comparison of different systems based on their long-run behavior and to limits of coefficients in a signature-based representation of the residual system lifetime are given.
PubDate: 2014-11-01
- Bayesian prediction in doubly stochastic Poisson process
- Abstract: A stochastic marked point process model based on a doubly stochastic Poisson process is considered in the problem of predicting the total size of future marks in a given period, given the history of the process. The underlying marked point process \((T_{i},Y_{i})_{i\ge 1}\), where \(T_{i}\) is the time of occurrence of the \(i\)th event and the mark \(Y_{i}\) is its characteristic (size), is supposed to be a non-homogeneous Poisson process on \(\mathbb {R}_{+}^{2}\) with intensity measure \(P\times \varTheta\), where \(P\) is known, whereas \(\varTheta\) is treated as an unknown measure of the total size of future marks in a given period. In the prediction problem considered, a Bayesian approach is used, assuming that \(\varTheta\) is random with a prior distribution given by a gamma process. The best predictor with respect to this prior distribution is constructed under a precautionary loss function. A simulation study comparing the behavior of the predictors under various criteria is provided.
PubDate: 2014-11-01
- On extremes of bivariate residual lifetimes from generalized Marshall–Olkin and time transformed exponential models
- Abstract: We study here extremes of residuals of the bivariate lifetime and the residual of extremes of the two lifetimes. In the case of the generalized Marshall–Olkin model and the total time transformed exponential model, we first present some sufficient conditions for the extremes of residuals to be stochastically larger than the residual of the corresponding extremes, and then investigate the stochastic order of the residual of extremes of the two lifetimes based on the majorization of the age vector of the residuals.
PubDate: 2014-11-01
- Model averaging based on James–Stein estimators
- Abstract: Existing model averaging methods are generally based on ordinary least squares (OLS) estimators. However, it is well known that the James–Stein (JS) estimator dominates the OLS estimator under quadratic loss, provided that the dimension of the coefficient vector is larger than two. Thus, we focus on model averaging based on JS estimators instead of OLS estimators. We develop a weight choice method and prove its asymptotic optimality. A simulation experiment shows promising results for the proposed model average estimator.
PubDate: 2014-11-01
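The dominance result the abstract invokes is easy to verify by simulation. A minimal sketch of the (positive-part) James–Stein estimator beating the raw observation vector under quadratic loss (illustrative; the paper's contribution is the model-averaging weight choice, not this estimator):

```python
import numpy as np

def james_stein(z, sigma2=1.0):
    """Positive-part James-Stein shrinkage of z ~ N_p(theta, sigma2 * I), p > 2."""
    p = z.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(z ** 2))
    return shrink * z

rng = np.random.default_rng(4)
theta = np.full(20, 0.5)                 # true mean vector, p = 20
risk_raw = risk_js = 0.0
for _ in range(2_000):
    z = theta + rng.normal(size=theta.size)
    risk_raw += np.sum((z - theta) ** 2)
    risk_js += np.sum((james_stein(z) - theta) ** 2)
print(risk_js / risk_raw)  # well below 1: JS dominates the unshrunk estimate
```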
- Classes of multiple decision functions strongly controlling FWER and FDR
- Abstract: Two general classes of multiple decision functions, where each member of the first class strongly controls the family-wise error rate (FWER), while each member of the second class strongly controls the false discovery rate (FDR), are described. These classes offer the possibility that optimal multiple decision functions with respect to a pre-specified Type II error criterion, such as the missed discovery rate (MDR), could be found which control the FWER or FDR Type I error rates. The gain in MDR of the associated FDR-controlling procedure relative to the well-known Benjamini–Hochberg procedure is demonstrated via a modest simulation study with gamma-distributed component data. Such multiple decision functions may have the potential of being utilized in multiple testing, specifically in the analysis of high-dimensional data sets.
PubDate: 2014-10-30
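The Benjamini–Hochberg step-up procedure used as the comparison benchmark can be sketched as follows (the p-values below are illustrative):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling the FDR at level q (BH step-up)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m        # step-up thresholds i*q/m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                    # reject the k smallest p-values
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(benjamini_hochberg(pvals, q=0.05))  # only the two smallest are rejected
```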
- Trimmed and winsorized semiparametric estimator for left-truncated and right-censored regression models
- Abstract: For a linear regression model subject to left-truncation and right-censoring, where the truncation and censoring points are known constants (or always observed if random), Karlsson and Laitila (Stat Probab Lett 78:2567–2571, 2008) proposed a semiparametric estimator which deals with left-truncation by trimming and right-censoring by 'winsorizing'. The estimator was motivated by a zero moment condition in which a transformed error term appears with trimmed and winsorized tails. This paper takes the semiparametric estimator further by deriving the asymptotic distribution that was not shown in Karlsson and Laitila (2008), and briefly discusses its implementation in practice.
PubDate: 2014-10-28
- Blocked semifoldovers of two-level orthogonal designs
- Abstract: Follow-up experimentation is often necessary for the successful use of fractional factorial designs. When some effects are believed to be significant but cannot be estimated using an initial design, adding another fraction is often recommended. As the initial design and its foldover (or semifoldover) are usually conducted at different stages, it may be desirable to include a block factor. In this article, we study the blocking effect of such a factor on foldover and semifoldover designs. We consider two general cases for the initial designs, which can be either unblocked or blocked designs. In both cases, we explore the relationships between the semifoldover of a design and its corresponding foldover design. More specifically, we obtain some theoretical results on when a semifoldover design can estimate the same two-factor interactions or main effects as the corresponding foldover. These results can be important for those who want to take advantage of the run size savings of a semifoldover without sacrificing the ability to estimate important effects.
PubDate: 2014-10-22
- Robust minimax Stein estimation under invariant data-based loss for spherically and elliptically symmetric distributions
- Abstract: From an observable \((X,U)\) in \(\mathbb R^p \times \mathbb R^k\), we consider estimation of an unknown location parameter \(\theta \in \mathbb R^p\) under two distributional settings: the density of \((X,U)\) is spherically symmetric with an unknown scale parameter \(\sigma\), or elliptically symmetric with an unknown covariance matrix \(\Sigma\). Evaluation of estimators of \(\theta\) is made under the classical invariant losses \(\Vert d - \theta \Vert ^2 / \sigma ^2\) and \((d - \theta )^t \Sigma ^{-1} (d - \theta )\), as well as under two respective data-based losses \(\Vert d - \theta \Vert ^2 / \Vert U\Vert ^2\) and \((d - \theta )^t S^{-1} (d - \theta )\), where \(\Vert U\Vert ^2\) estimates \(\sigma ^2\) while \(S\) estimates \(\Sigma\). We provide new Stein and Stein–Haff identities that allow analysis of risk for these two new losses, including a new identity that gives rise to unbiased estimates of risk (up to a multiple of \(1 / \sigma ^2\)) in the spherical case for a larger class of estimators than in Fourdrinier et al. (J Multivar Anal 85:24–39, 2003). Minimax estimators of Baranchik form illustrate the theory. It is found that the range of shrinkage of these estimators is slightly larger for the data-based losses compared to the usual invariant losses. It is also found that \(X\) is minimax with finite risk with respect to the data-based losses for many distributions for which its risk is infinite when calculated under the classical invariant losses. In these cases, including the multivariate \(t\) and, in particular, the multivariate Cauchy, we find improved shrinkage estimators as well.
PubDate: 2014-10-04
- Modified maximum spacings method for generalized extreme value distribution and applications in real data analysis
- Abstract: This paper analyzes weekly closing price data of the S&P 500 stock index and electrical insulation element lifetime data based on the generalized extreme value distribution. A new estimation method, the modified maximum spacings (MSP) method, is proposed and computed using an interior penalty function algorithm. The standard error of the proposed method is calculated through the bootstrap method. The asymptotic properties of the modified MSP estimators are discussed. Some simulations are performed, which show that the proposed method is not only available for the whole shape parameter space but is also highly efficient. The benchmark risk index, value at risk (VaR), is evaluated according to the proposed method, and the confidence interval of VaR is also calculated through the bootstrap method. Finally, the results are compared with those derived by empirical calculation and some existing methods.
PubDate: 2014-10-01
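The maximum spacings idea — choose the parameter so that the fitted CDF spreads the ordered sample as evenly as possible — can be sketched on a simple one-parameter model (plain MSP for an exponential rate; the paper's modified MSP for the GEV and its interior penalty algorithm are not reproduced here):

```python
import numpy as np

def msp_objective(lam, x):
    """Mean log-spacing of the fitted Exp(rate=lam) CDF at the ordered sample."""
    u = 1.0 - np.exp(-lam * np.sort(x))                  # F(x_(i)) under the model
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    return np.mean(np.log(np.clip(spacings, 1e-300, None)))

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=2_000)               # true rate lambda = 0.5

grid = np.linspace(0.1, 2.0, 400)
lam_hat = grid[np.argmax([msp_objective(l, x) for l in grid])]
print(lam_hat)  # close to the true rate 0.5
```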
- On sooner and later waiting time distributions associated with simple patterns in a sequence of bivariate trials
- Abstract: In this article, we study sooner/later waiting time problems for simple patterns in a sequence of bivariate trials. The double generating functions of the sooner/later waiting times for the simple patterns are expressed in terms of the double generating functions of the numbers of occurrences of the simple patterns. Effective computational tools are developed for the evaluation of the waiting time distributions. The results presented here provide perspectives on the waiting time problems arising from bivariate trials and extend a framework for studying the exact distributions of patterns. Finally, some examples are given in order to illustrate how our theoretical results are employed for the investigation of the waiting time problems for simple patterns.
PubDate: 2014-10-01
- Asymptotic behaviour of near-maxima of Gaussian sequences
- Abstract: Let \((X_1,X_2,\ldots ,X_n)\) be a Gaussian random vector with a common correlation coefficient \(\rho _n,\,0\le \rho _n<1\), and let \(M_n= \max (X_1,\ldots , X_n),\,n\ge 1\). For any given \(a>0\), define \(T_n(a)= \left\{ j,\,1\le j\le n,\,X_j\in (M_n-a,\,M_n]\right\} ,\,K_n(a)= \#T_n(a)\) and \(S_n(a)=\sum \nolimits _{j\in T_n(a)}X_j,\,n\ge 1\). In this paper, we obtain the limit distributions of \((K_n(a))\) and \((S_n(a))\), under the assumption that \(\rho _n\rightarrow \rho \) as \(n\rightarrow \infty \), for some \(\rho \in [0,1)\).
PubDate: 2014-10-01
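The quantities \(K_n(a)\) and \(S_n(a)\) are straightforward to simulate; a sketch in the i.i.d. case \(\rho_n = 0\) (illustrative only; the paper derives the limit distributions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, a = 100_000, 0.5

x = rng.normal(size=n)       # i.i.d. standard Gaussian, i.e. rho_n = 0
m = x.max()                  # M_n
near = x > m - a             # indices in T_n(a): within a of the maximum
K = int(near.sum())          # K_n(a): number of near-maxima
S = float(x[near].sum())     # S_n(a): sum of the near-maxima
print(K, S)
```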
- Empirical likelihood for high-dimensional linear regression models
- Abstract: High-dimensional data are becoming prevalent, and many new methodologies and accompanying theories for high-dimensional data analysis have emerged in response. Empirical likelihood, as a classical nonparametric method of statistical inference, has proved to possess many good features. In this paper, our focus is to investigate the asymptotic behavior of empirical likelihood for regression coefficients in high-dimensional linear models. We give regularity conditions under which the standard normal calibration of empirical likelihood is valid in high dimensions. Both random and fixed designs are considered. Simulation studies are conducted to check the finite sample performance.
PubDate: 2014-10-01
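A minimal sketch of empirical likelihood for a one-dimensional mean, using bisection on the Lagrange multiplier (low-dimensional and illustrative only; the paper's high-dimensional calibration theory is not reproduced here):

```python
import numpy as np

def el_log_ratio(mu, x, tol=1e-10):
    """-2 log empirical likelihood ratio for the mean: bisection on the Lagrange
    multiplier lam solving sum((x - mu) / (1 + lam * (x - mu))) = 0."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                        # mu outside the convex hull of the data
    lo = -1.0 / d.max() + tol                # keep all weights 1 + lam * d_i positive
    hi = -1.0 / d.min() - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sum(d / (1.0 + mid * d)) > 0:  # the score is decreasing in lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log(1.0 + lam * d))

rng = np.random.default_rng(7)
x = rng.normal(loc=1.0, size=200)
print(el_log_ratio(x.mean(), x))  # ~0: the ratio is minimized at the sample mean
print(el_log_ratio(1.0, x))       # asymptotically chi-square(1) at the true mean
```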
- Characterizations of bivariate distributions using concomitants of record values
- Abstract: In this paper, we consider a family of bivariate distributions which is a generalization of the Morgenstern family of bivariate distributions. We derive some properties of concomitants of record values which characterize this generalized class of distributions. The role of concomitants of record values in the unique determination of the parent bivariate distribution is established. We also derive properties of concomitants of record values which characterize each of the following families: the Morgenstern family, the bivariate Pareto family and a generalized Gumbel's family of bivariate distributions. Some applications of the characterization results are discussed and important conclusions based on the characterization results are drawn.
PubDate: 2014-10-01
- Second order longitudinal dynamic models with covariates: estimation and forecasting
- Abstract: In this paper, we propose an extension of the first-order branching process with immigration in the presence of fixed covariates and unobservable random effects. The extension permits the possibility that individuals from the second generation of the process may contribute to the total number of offspring at time \(t\) by producing offspring of their own. We study the basic properties of the second-order process and discuss a generalized quasilikelihood (GQL) estimation of the mean and variance parameters and the generalized method of moments estimation of the correlation parameters. We discuss the asymptotic distribution of the GQL estimator by first deriving the influence curve of the estimator. For the fixed effects model we derive a forecasting function and the variance of the forecast error. The performance of the proposed estimators and forecasts is examined through a simulation study.
PubDate: 2014-10-01