1.
From the economic viewpoint of reliability theory, this paper addresses a scheduling replacement problem for a single operating system that works at random times on multiple jobs. The system is subject to stochastic failure, which triggers an imperfect maintenance activity according to a random failure mechanism: minimal repair upon a type-I (repairable) failure, or corrective replacement upon a type-II (non-repairable) failure. Three scheduling models for the system with multiple jobs are considered: a single job, N tandem jobs, and N parallel jobs. To control the deterioration process, preventive replacement is planned at a scheduled time T or at the completion of the job(s) in each model. The objective is to determine the optimal scheduling parameter (T* or N*) that minimizes the mean cost rate function over a finite time horizon for each model. A numerical example is provided to illustrate the proposed analytical model. Because the framework and analysis are general, the proposed models extend several existing results.
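The optimization in each model reduces to minimizing a mean cost rate over the decision variable. As a minimal sketch, assuming a classic periodic-replacement-with-minimal-repair formulation with a Weibull cumulative hazard and illustrative costs (none of these values or forms are taken from the paper), the scheduling time T* can be found by a grid search:

```python
import math

# Illustrative parameters, not taken from the paper.
C_P, C_M = 5.0, 1.0      # preventive-replacement cost and minimal-repair cost
BETA, THETA = 2.0, 10.0  # Weibull shape and scale of the failure process

def mean_cost_rate(T):
    """Mean cost rate of a classic periodic-replacement policy with minimal
    repair: repairs accrue at the cumulative hazard H(T) = (T/theta)^beta
    until a planned replacement at time T."""
    H = (T / THETA) ** BETA
    return (C_P + C_M * H) / T

# Grid search for the scheduling time T* minimizing the mean cost rate.
grid = [0.1 * k for k in range(1, 500)]
T_star = min(grid, key=mean_cost_rate)
```

For BETA = 2 the cost rate is C_P/T + C_M·T/THETA², minimized analytically at T = THETA·sqrt(C_P/C_M) ≈ 22.4, which the grid search recovers.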
2.
贺建风  李宏煜 《统计研究》2021,38(4):131-144
In the era of the digital economy, social networks, as an important carrier of the digital platform economy, have attracted wide attention from scholars at home and abroad. In the big-data context, social networks have great commercial application value, but because their scale is unprecedentedly large, traditional network analysis methods are no longer applicable owing to excessive computational cost. Obtaining a sample network through a network sampling algorithm and then inferring the whole network saves computing resources, so the quality of the sampling algorithm directly affects the accuracy of conclusions drawn from social network analysis. Existing social-network sampling algorithms have shortcomings such as ignoring the internal topology of the network, easily becoming trapped in a local subnetwork, and low sampling efficiency. To remedy these defects, this paper combines the community characteristics of large-scale social networks and proposes a clustered random-walk sampling algorithm. The method first applies a community-clustering algorithm to partition the nodes of the original network into communities, and then performs random-walk sampling within each community to obtain the sample network. Results from both numerical simulations and a case application show that the clustered random-walk sampling algorithm overcomes the drawbacks of traditional network sampling algorithms and preserves the structural characteristics of the original network well while reducing its scale. In addition, the algorithm can be run in parallel, effectively improving sampling efficiency, which is of great practical significance for sampling large-scale social networks in the big-data era.
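A minimal pure-Python sketch of the clustered random-walk idea, assuming an undirected graph given as symmetric adjacency lists and a community partition already produced by some community-detection step (the paper's specific clustering algorithm is not reproduced here):

```python
import random

def random_walk_sample(adj, start, steps, rng):
    """Collect the set of nodes visited by a simple random walk."""
    node, visited = start, {start}
    for _ in range(steps):
        node = rng.choice(adj[node])
        visited.add(node)
    return visited

def clustered_rw_sample(adj, communities, steps_per_comm, seed=0):
    """Walk inside each community separately, then pool the visited nodes.
    `communities` is assumed to come from a prior community-detection step."""
    rng = random.Random(seed)
    sample = set()
    for comm in communities:
        comm = set(comm)
        # Restrict adjacency lists to edges inside the community.
        sub = {u: [v for v in adj[u] if v in comm] for u in comm}
        sub = {u: nbrs for u, nbrs in sub.items() if nbrs}
        if not sub:
            continue
        start = rng.choice(sorted(sub))
        sample |= random_walk_sample(sub, start, steps_per_comm, rng)
    return sample

# Toy graph: two triangles joined by one bridge edge.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
communities = [[1, 2, 3], [4, 5, 6]]
nodes = clustered_rw_sample(adj, communities, steps_per_comm=10)
```

Because each community is walked independently, the per-community loop can be distributed across workers, which is the parallelism the abstract mentions.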
3.
Summary.  We detail a general method for measuring agreement between two statistics. An application is two ratios of directly standardized rates which differ only by the choice of the standard. If the statistics have a high value for the coefficient of agreement then the expected squared difference between the statistics is small relative to the variance of the average of the two statistics, and inferences vary little by changing statistics. The estimation of a coefficient of agreement between two statistics is not straightforward because there is only one pair of observed values, each statistic calculated from the data. We introduce estimators of the coefficient of agreement for two statistics and discuss their use, especially as applied to functions of standardized rates.
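The paper's estimators are derived analytically from the single observed pair of statistics. Purely to make the target quantity concrete, the sketch below bootstraps both statistics from one sample and plugs into an assumed coefficient of the form psi = 1 − E[(T1 − T2)²]/Var[(T1 + T2)/2]; both this definition and the bootstrap route are illustrative assumptions, not the authors' method:

```python
import random, statistics

def agreement(data, stat1, stat2, B=500, seed=1):
    """Bootstrap approximation of psi = 1 - E[(T1 - T2)^2] / Var[(T1 + T2)/2].
    Assumed form for illustration only; the paper derives estimators
    analytically from a single dataset."""
    rng = random.Random(seed)
    n = len(data)
    sq_diffs, avgs = [], []
    for _ in range(B):
        boot = [data[rng.randrange(n)] for _ in range(n)]
        t1, t2 = stat1(boot), stat2(boot)
        sq_diffs.append((t1 - t2) ** 2)
        avgs.append((t1 + t2) / 2)
    return 1 - statistics.fmean(sq_diffs) / statistics.variance(avgs)

rng = random.Random(7)
data = [rng.gauss(0.0, 1.0) for _ in range(50)]
psi = agreement(data, statistics.fmean, statistics.median)  # mean vs. median
```

A value of psi near 1 means the two statistics differ by much less than the sampling variability of their average, so inferences change little when switching between them.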
4.
Summary.  Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportionally to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that attained by weighting with ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results. This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
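The unmodified DerSimonian-Laird pooling step referred to above can be sketched as follows; the moment estimator of the between-study variance tau² is the standard one, while the paper's modification concerns which within-study variances enter the weights:

```python
def dersimonian_laird(y, v):
    """Standard DerSimonian-Laird random-effects pooling of effect sizes y
    (e.g. risk differences) with within-study variances v.  The paper's
    modification replaces the conditional within-study variances by
    unconditional ones; the tau^2 moment estimator below is unchanged."""
    k = len(y)
    w = [1 / vi for vi in v]                      # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # heterogeneity
    tau2 = max(0.0, (Q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    wstar = [1 / (vi + tau2) for vi in v]         # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    var_mu = 1 / sum(wstar)
    return mu, tau2, var_mu

mu, tau2, var_mu = dersimonian_laird([0.10, 0.30, 0.20], [0.01, 0.01, 0.04])
```

In this toy call the heterogeneity statistic Q does not exceed its expectation k − 1, so tau² is truncated to zero and the pooled estimate reduces to the fixed-effect weighted mean 0.2.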
5.
Generalized additive models for location, scale and shape
Summary.  A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
6.
The components of a reliability system subjected to a common random environment usually have dependent lifetimes. This paper studies the stochastic properties of such a system when the lifetimes of the components follow multivariate frailty models and multivariate mixed proportional reversed hazard rate (PRHR) models, respectively. Through stochastic comparisons, we shed new light on how the random environment affects the number of working components of a reliability system and assess the performance of a k-out-of-n system.
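A gamma frailty acting multiplicatively on every component's hazard is one common way to model such a shared environment. The sketch below (illustrative parameters, not the paper's models) estimates the reliability of a k-out-of-n system by Monte Carlo; conditional on the environment Z the lifetimes are independent, while marginally they are dependent:

```python
import random

def simulate_k_out_of_n(n, k, t, trials=20000, seed=3):
    """Monte Carlo estimate of P(system works at time t) for a k-out-of-n
    system whose components share a common random environment: a gamma
    frailty Z multiplies every component's hazard rate, inducing dependence
    among the lifetimes.  Baseline lifetimes are unit exponential; all
    parameters are illustrative."""
    rng = random.Random(seed)
    working = 0
    for _ in range(trials):
        z = rng.gammavariate(2.0, 0.5)   # common environment, E[Z] = 1
        # Conditional on Z = z, lifetimes are i.i.d. exponential with rate z.
        alive = sum(rng.expovariate(z) > t for _ in range(n))
        working += alive >= k            # system works if >= k components do
    return working / trials

p = simulate_k_out_of_n(n=5, k=3, t=0.5)
```

Varying the frailty distribution in this harness is one way to probe, numerically, the stochastic-comparison questions the abstract raises.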
7.
In many industrial quality control experiments and destructive stress tests, the only available data are successive minima (or maxima), i.e., record-breaking data. Two sampling schemes are used to collect record-breaking data: random sampling and inverse sampling. In random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined and thus the sample size is a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if any, is more efficient. Since the two schemes are asymptotically equivalent, the simulations were carried out for small to moderate record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, if the estimators were well behaved, there was no significant difference between the mean square errors of the estimates under the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent, whereas the estimates obtained from inverse sampling were always consistent. Moreover, for moderate-sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
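The two data-collection schemes can be mimicked directly. Assuming unit-exponential measurements for illustration, random sampling fixes the number of draws and lets the record count vary, while inverse sampling fixes the record count and lets the total sample size vary:

```python
import random

rng = random.Random(11)

def records_random_sampling(n):
    """Random sampling: draw a fixed n observations and keep the successive
    minima; the number of records is random."""
    current, recs = float("inf"), []
    for _ in range(n):
        x = rng.expovariate(1.0)
        if x < current:          # a new record-breaking minimum
            current = x
            recs.append(x)
    return recs

def records_inverse_sampling(r):
    """Inverse sampling: keep drawing until r records are observed;
    the total sample size is random."""
    current, recs, n = float("inf"), [], 0
    while len(recs) < r:
        x = rng.expovariate(1.0)
        n += 1
        if x < current:
            current = x
            recs.append(x)
    return recs, n

recs_a = records_random_sampling(100)
recs_b, n_b = records_inverse_sampling(5)
```

Wrapping either generator in a loop that fits the model by maximum likelihood on each replicate gives the bias and mean-square-error comparison the paper carries out.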
8.

In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n − 2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n − 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive.
We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
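A small Monte Carlo in the spirit of the study, for a case the abstract flags as problematic (fixed, trended regressor with positively autocorrelated errors): the classical OLS t-test is applied to data generated under a null slope, and its empirical rejection rate is compared with the nominal 5% level. All settings here are illustrative, not the article's design:

```python
import math, random

def ar1_series(n, rho, rng):
    """AR(1) errors with stationary unit variance."""
    e = rng.gauss(0, 1)
    out = [e]
    for _ in range(n - 1):
        e = rho * e + rng.gauss(0, math.sqrt(1 - rho ** 2))
        out.append(e)
    return out

def ols_t(x, y):
    """Classical t statistic for the slope, ignoring autocorrelation."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x)
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sxx
    a = yb - b * xb
    s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b / math.sqrt(s2 / sxx)

# Empirical rejection rate under H0 (true slope = 0) with a fixed,
# trended regressor and rho = 0.5.
rng = random.Random(42)
n, rho, crit = 30, 0.5, 2.048     # crit ~ t quantile with 28 df at 97.5%
x = list(range(n))
reps = 2000
rej = sum(abs(ols_t(x, ar1_series(n, rho, rng))) > crit
          for _ in range(reps)) / reps
```

With positive autocorrelation the classical test rejects considerably more often than 5%, which is the kind of validity failure the corrected procedures above are designed to avoid.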
9.
In this study, we propose an information measure of the uncertainty associated with the random equilibrium residual lifetime of a system driven by an N-state random evolution. A U-statistic test based on a moment inequality is proposed for testing the hypothesis that the uncertainty of the equilibrium remaining life of the system remains unchanged (when the system is in the steady state) against the alternative that the system's equilibrium residual life has increasing uncertainty over time (i.e., the life distribution has the Increasing Equilibrium Residual Entropy property). Numerical results, including tabulated critical values and the empirical power of the proposed test statistic, are also presented.
10.
This paper considers the estimation problem when lifetimes are Weibull distributed and are collected under Type-II progressive censoring with random removals, where the number of units removed at each failure time follows a discrete uniform distribution. The expected duration of this censoring plan is discussed and compared numerically to that under Type-II censoring without removals. Maximum likelihood estimators of the parameters and their asymptotic variances are derived.
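Generating such a progressively Type-II censored Weibull sample can be sketched as follows; the removal law is a discrete uniform over the units still removable (with the final removal taking whatever remains), and all numeric choices are illustrative:

```python
import random

def progressive_type2_sample(n, m, shape, scale, seed=5):
    """Progressively Type-II censored Weibull sample with random removals:
    n units start on test, m failures are observed, and after the i-th
    failure R_i surviving units are withdrawn, with R_i drawn uniformly
    from {0, ..., units still removable}.  Illustrative sketch only."""
    rng = random.Random(seed)
    # random.weibullvariate takes (scale, shape) in that order.
    alive = sorted(rng.weibullvariate(scale, shape) for _ in range(n))
    failures = []
    for i in range(m):
        failures.append(alive.pop(0))         # next observed failure
        removable = len(alive) - (m - 1 - i)  # keep enough for future failures
        r = removable if i == m - 1 else rng.randint(0, removable)
        for _ in range(r):                    # withdraw r random survivors
            alive.pop(rng.randrange(len(alive)))
    return failures

fails = progressive_type2_sample(n=20, m=8, shape=1.5, scale=10.0)
```

Feeding such samples into the Weibull likelihood for progressively censored data is how the estimators' behavior can be checked against the derived asymptotic variances.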