21.
Sumith Gunasekera, Communications in Statistics – Simulation and Computation, 2017, 46(2): 933-947
The Theil, Pietra, Éltetö and Frigyes measures of income inequality associated with the Pareto distribution function are expressed in terms of the parameters defining the Pareto distribution. Inference procedures based on the generalized variable method, the large-sample method, and the Bayesian method for testing hypotheses about, and constructing confidence intervals for, these measures are discussed. The results of a Monte Carlo study are used to compare the performance of the suggested inference procedures for samples from a population characterized by a Pareto distribution.
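For concreteness, the Theil index has a simple closed form under a Pareto(α, x_m) distribution, T = 1/(α − 1) + ln((α − 1)/α), independent of the scale x_m. A minimal sketch (the measure itself, not the paper's inference procedures) checking the formula against simulation:

```python
import numpy as np

def theil_pareto(alpha):
    # Closed-form Theil index for a Pareto(alpha, x_m) distribution (alpha > 1);
    # the value does not depend on the scale parameter x_m.
    return 1.0 / (alpha - 1.0) + np.log((alpha - 1.0) / alpha)

def theil_empirical(x):
    # Sample Theil index: mean of (x/mu) * log(x/mu).
    r = x / x.mean()
    return np.mean(r * np.log(r))

rng = np.random.default_rng(0)
alpha, xm = 3.0, 2.0
# numpy's pareto draws the Lomax form on [0, inf); shift/scale to classical Pareto
x = xm * (rng.pareto(alpha, size=200_000) + 1.0)
print(theil_pareto(alpha), theil_empirical(x))
```

The generalized variable and Bayesian procedures discussed in the abstract build tests and intervals around such parameter expressions.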
22.
Jun Zhang, Communications in Statistics – Theory and Methods, 2017, 46(24): 12165-12193
We study partial linear single-index models (PLSiMs) when the response and the covariates in the parametric part are measured with additive distortion measurement errors. These distortions are modeled by unknown functions of a commonly observable confounding variable. We use the semiparametric profile least-squares method to estimate the parameters in the PLSiMs based on the residuals obtained from the distorted variables and the confounding variable. We also employ the smoothly clipped absolute deviation (SCAD) penalty to select the relevant variables in the PLSiMs. We show that the resulting SCAD estimators are consistent and possess the oracle property. For the nonparametric link function, we construct simultaneous confidence bands and obtain the asymptotic distribution of the maximum absolute deviation between the estimated link function and the true link function. A simulation study is conducted to evaluate the performance of the proposed methods, and a real dataset is analyzed for illustration.
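The SCAD penalty mentioned above has the standard Fan–Li closed form; a minimal sketch of the penalty and the univariate thresholding rule it induces, assuming the conventional a = 3.7 (illustrative only, not the paper's profile least-squares implementation):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), evaluated elementwise."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,                                            # linear (lasso-like) zone
        np.where(
            b <= a * lam,
            -(b**2 - 2 * a * lam * b + lam**2) / (2 * (a - 1)),  # quadratic transition
            (a + 1) * lam**2 / 2,                           # constant: no shrinkage of large effects
        ),
    )

def scad_threshold(z, lam, a=3.7):
    """Univariate SCAD thresholding rule (orthonormal-design solution)."""
    az = np.abs(z)
    soft = np.sign(z) * np.maximum(az - lam, 0.0)          # |z| <= 2*lam: soft threshold
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)   # 2*lam < |z| <= a*lam
    return np.where(az <= 2 * lam, soft, np.where(az <= a * lam, mid, z))
```

The flat tail of the penalty is what yields unbiased estimates of large coefficients and, in turn, the oracle property claimed in the abstract.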
23.
Yang Yu, Zhihong Zou, Shanshan Wang, Journal of Statistical Computation and Simulation, 2019, 89(17): 3290-3312
This paper proposes the use of the Bernstein–Dirichlet process prior for a new nonparametric approach to estimating the link function in the single-index model (SIM). The Bernstein–Dirichlet process prior has so far mainly been used for nonparametric density estimation. Here we modify this approach to allow for an approximation of the unknown link function. Instead of the usual Gaussian distribution, the error term is assumed to be asymmetric Laplace distributed which increases the flexibility and robustness of the SIM. To automatically identify truly active predictors, spike-and-slab priors are used for Bayesian variable selection. Posterior computations are performed via a Metropolis-Hastings-within-Gibbs sampler using a truncation-based algorithm for stick-breaking priors. We compare the efficiency of the proposed approach with well-established techniques in an extensive simulation study and illustrate its practical performance by an application to nonparametric modelling of the power consumption in a sewage treatment plant.
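The truncation-based handling of stick-breaking priors mentioned above is easy to sketch; the function name, truncation level N, and standard-normal base measure below are illustrative choices, not the paper's exact sampler:

```python
import numpy as np

def truncated_stick_breaking(alpha, N, base_sampler, rng):
    """Draw (weights, atoms) from a DP(alpha, G0) truncated at N sticks.

    Stick-breaking: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j),
    with the last stick set to 1 so the weights sum exactly to one.
    """
    v = rng.beta(1.0, alpha, size=N)
    v[-1] = 1.0  # absorb the leftover stick mass at the truncation level
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    atoms = base_sampler(N)  # i.i.d. draws from the base measure G0
    return w, atoms

rng = np.random.default_rng(1)
w, atoms = truncated_stick_breaking(2.0, 50, lambda n: rng.normal(size=n), rng)
```

In a Gibbs sampler these truncated weights and atoms are resampled conditionally on cluster allocations at each sweep.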
24.
Analysis of massive datasets is challenging owing to the limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine these regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and data analysis are conducted to illustrate the finite-sample performance of the proposed methods.
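The split–estimate–combine recipe can be sketched with ordinary least squares standing in for the per-block CQR fit (a simplification: the paper combines CQR estimates, and its weights are chosen for efficiency rather than simply by block size):

```python
import numpy as np

def divide_and_conquer_fit(X, y, n_blocks, fit):
    """Split rows into blocks, fit each block, combine by block-size-weighted average."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    estimates = np.array([fit(X[i], y[i]) for i in idx])   # one estimate per block
    weights = np.array([len(i) for i in idx], dtype=float)
    weights /= weights.sum()
    return weights @ estimates

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
n, p = 10_000, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(size=n)
beta_dc = divide_and_conquer_fit(X, y, n_blocks=10, fit=ols)
```

Only one block of rows needs to reside in memory at a time, which is the memory saving the abstract refers to.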
25.
In this article, to reduce the computational load in performing Bayesian variable selection, we used a variant of reversible-jump Markov chain Monte Carlo methods, and the Holmes and Held (HH) algorithm, to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we proposed a simple proposal distribution for model index variables, and used a simulation study and a real example to compare the performance of the HH algorithm with our proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.
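A simple symmetric proposal for model index variables, of the kind compared above, flips one inclusion indicator at a time; a toy sketch with a made-up log-posterior (not the HH algorithm itself, which also updates latent auxiliary variables):

```python
import numpy as np

def mh_variable_selection(log_post, p, n_iter, rng):
    """Metropolis-Hastings over a binary inclusion vector gamma.

    Proposal: flip one randomly chosen indicator. The flip is symmetric,
    so the acceptance ratio reduces to the posterior ratio.
    """
    gamma = np.zeros(p, dtype=int)
    lp = log_post(gamma)
    visits = np.zeros(p)
    for _ in range(n_iter):
        j = rng.integers(p)
        prop = gamma.copy()
        prop[j] = 1 - prop[j]
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            gamma, lp = prop, lp_prop
        visits += gamma
    return visits / n_iter  # posterior inclusion frequencies

# Toy target: indicators 0 and 1 strongly favoured, the rest penalised.
scores = np.array([3.0, 3.0, -3.0, -3.0, -3.0])
rng = np.random.default_rng(3)
freq = mh_variable_selection(lambda g: float(g @ scores), len(scores), 5000, rng)
```

More elaborate proposals (e.g. flipping blocks of correlated predictors together) trade per-step cost against mixing speed, which is the comparison the abstract describes.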
26.
We consider the estimation of the conditional hazard function of a scalar response variable Y given a Hilbertian random variable X when the observations are linked via a single-index structure in the quasi-associated framework. We establish the pointwise almost-complete convergence and the uniform almost-complete convergence (with rates) of the estimator of this model. A simulation is given to illustrate the good practical behavior of our methodology.
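The estimation target is the conditional hazard h(y|x) = f(y|x)/(1 − F(y|x)); a scalar-covariate kernel sketch (the paper's X is Hilbertian and enters through a single index, so this shows only the general shape of such estimators):

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def conditional_hazard(x0, y0, X, Y, hx, hy):
    """Kernel estimate of the conditional hazard f(y0|x0) / (1 - F(y0|x0))."""
    wx = gauss((X - x0) / hx)                                  # covariate kernel weights
    f = np.sum(wx * gauss((Y - y0) / hy)) / (hy * np.sum(wx))  # conditional density
    F = np.sum(wx * (Y <= y0)) / np.sum(wx)                    # conditional cdf
    return f / (1.0 - F)

rng = np.random.default_rng(4)
X = rng.normal(size=50_000)
Y = rng.normal(size=50_000)  # Y independent of X: true hazard at y=0 is phi(0)/0.5
h0 = conditional_hazard(0.0, 0.0, X, Y, hx=0.5, hy=0.2)
```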
27.
Journal of Statistical Computation and Simulation, 2012, 82(6): 1102-1116
In this paper, a variables repetitive group sampling plan based on one-sided process capability indices is proposed to deal with lot sentencing for one-sided specifications. The parameters of the proposed plan are tabulated for some combinations of acceptable quality levels with commonly used producer's and consumer's risks. The efficiency of the proposed plan is compared with the Pearn and Wu [Critical acceptance values and sample sizes of a variables sampling plan for very low fraction of defectives. Omega – Int J Manag Sci. 2006;34(1):90–101] plan in terms of sample size and the power curve. An example is given to illustrate the proposed methodology.
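The operating characteristic of a capability-index-based variables plan can be simulated directly; a sketch for a single-sampling version (the repetitive group plan of the paper uses two acceptance constants and allows resampling, which is omitted here):

```python
import numpy as np

def accept_prob(n, c0, mu, sigma, usl, n_sim=20_000, rng=None):
    """Simulated probability that a variables plan accepts the lot:
    sample n items, accept if the estimated index C_pu = (USL - xbar)/(3 s) >= c0."""
    rng = rng or np.random.default_rng()
    x = rng.normal(mu, sigma, size=(n_sim, n))
    c_hat = (usl - x.mean(axis=1)) / (3 * x.std(axis=1, ddof=1))
    return (c_hat >= c0).mean()

rng = np.random.default_rng(8)
good = accept_prob(50, 1.0, mu=0.0, sigma=1.0, usl=4.5, rng=rng)  # true C_pu = 1.5
bad = accept_prob(50, 1.0, mu=0.0, sigma=1.0, usl=3.0, rng=rng)   # true C_pu = 1.0
```

Tabulating such acceptance probabilities over the true fraction defective traces the power curve used in the paper's comparison.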
28.
Housila P. Singh, Communications in Statistics – Theory and Methods, 2017, 46(8): 3957-3984
This paper addresses the problem of estimating a general parameter using information on an auxiliary variable X. We suggest a class of exponential-type ratio estimators for the parameter and study its properties. It is shown that the estimators due to Upadhyaya et al. [Journal of Statistical Theory and Practice (2011), 5(2), 285–302] and Yadav and Kadilar [Revista Colombiana de Estadística (2013), 36(1), 145–152] are members of the proposed class. We also show that the suggested estimator is more efficient than the estimators of Upadhyaya et al. (2011) and Yadav and Kadilar (2013). A numerical illustration is provided in support of the present study.
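The basic exponential-type ratio estimator underlying such classes is ȳ·exp((X̄ − x̄)/(X̄ + x̄)), where X̄ is the known population mean of the auxiliary variable; a minimal numerical sketch (the paper's class carries additional tuning constants):

```python
import numpy as np

def exp_ratio_estimator(y_sample, x_sample, X_bar):
    """Exponential-type ratio estimator of the population mean of Y,
    exploiting the known population mean X_bar of the auxiliary variable."""
    y_bar, x_bar = y_sample.mean(), x_sample.mean()
    return y_bar * np.exp((X_bar - x_bar) / (X_bar + x_bar))

rng = np.random.default_rng(5)
N = 100_000
x = rng.gamma(4.0, 2.0, size=N)           # auxiliary variable, population mean known
y = 3.0 * x + rng.normal(0, 2.0, size=N)  # study variable, correlated with x
idx = rng.choice(N, size=200, replace=False)
est = exp_ratio_estimator(y[idx], x[idx], x.mean())
```

The exponential adjustment shrinks the sample mean toward the value suggested by the auxiliary information, which is where the efficiency gain over the plain sample mean comes from when X and Y are strongly correlated.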
29.
Testing for bioequivalence of highly variable drugs from TR-RT crossover designs with heterogeneous residual variances
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs, whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than two periods are not always feasible. The present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with a desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to two periods.
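The GPQ machinery can be illustrated on a much simpler problem, a normal mean with unknown variance, where the GPQ interval reproduces the classical t-interval (the paper's GPQs target bioequivalence parameters under the TR-RT mixed model, not this toy case):

```python
import numpy as np
from scipy import stats

def gpq_mean_interval(x, level=0.95, n_draws=100_000, rng=None):
    """Generalized-pivotal-quantity interval for a normal mean.

    GPQ draws: R_sigma2 = (n-1) s^2 / chi2_{n-1},
               R_mu     = xbar - Z * sqrt(R_sigma2 / n),  Z standard normal;
    percentiles of the R_mu draws give the interval.
    """
    rng = rng or np.random.default_rng()
    n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
    r_sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draws)
    r_mu = xbar - rng.normal(size=n_draws) * np.sqrt(r_sigma2 / n)
    lo, hi = np.quantile(r_mu, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

rng = np.random.default_rng(7)
x = rng.normal(10.0, 2.0, size=20)
lo, hi = gpq_mean_interval(x, rng=rng)
```

The same substitute-observed-statistics construction extends to parameters with no exact pivot, which is why it suits the heterogeneous-variance mixed model above.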
30.
In this paper, we develop a new estimation procedure based on quantile regression for semiparametric partially linear varying-coefficient models. The proposed estimation approach is empirically shown to be much more efficient than the popular least-squares estimation method for non-normal error distributions, while losing almost no efficiency for normal errors. Asymptotic normality of the proposed estimators for both the parametric and nonparametric parts is established. To achieve sparsity when irrelevant variables are present in the model, two variable selection procedures based on adaptive penalties are developed to select important parametric covariates as well as significant nonparametric functions. Moreover, both variable selection procedures are shown to enjoy the oracle property under some regularity conditions. Monte Carlo simulations are conducted to assess the finite-sample performance of the proposed estimators, and a real-data example is used to illustrate the application of the proposed methods.
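The building block of the procedure above is minimisation of the Koenker–Bassett check loss ρ_τ(u) = u(τ − 1{u < 0}); a minimal linear-model sketch via direct optimisation (the paper's estimator adds a nonparametric varying-coefficient part and adaptive penalties):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def quantile_regression(X, y, tau):
    """Linear tau-th quantile regression by direct minimisation of the check loss."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    obj = lambda b: np.sum(check_loss(y - X @ b, tau))
    return minimize(obj, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
beta_med = quantile_regression(X, y, tau=0.5)  # median regression
```

Because the check loss grows linearly in the residual, the fit is robust to heavy-tailed errors, which is the source of the efficiency gains reported for non-normal distributions.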