1.
Abstract.  Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. We show that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
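The moment-type estimator mentioned above can be viewed as kernel smoothing of the pooled event times divided by the number of subjects still under observation. The sketch below is a minimal illustration under the independent-censoring setup, not the authors' implementation; the Gaussian kernel, the bandwidth value, and the toy data are all illustrative assumptions.

```python
import numpy as np

def moment_rate_estimate(event_times, censor_times, grid, h):
    """Moment-type kernel estimate of the occurrence rate function:
    smooth the pooled recurrent event times with bandwidth h and
    divide by the at-risk count at each grid point."""
    def gauss_kernel(u):
        return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

    rate = np.empty_like(grid, dtype=float)
    for k, t in enumerate(grid):
        at_risk = np.sum(censor_times >= t)        # subjects still observed at t
        smoothed = sum(gauss_kernel((t - tij) / h) / h
                       for times in event_times for tij in times)
        rate[k] = smoothed / at_risk if at_risk > 0 else np.nan
    return rate

# toy data: 3 subjects with recurrent event times and censoring times
events = [[0.5, 1.2, 2.0], [0.8, 1.5], [0.3, 2.5, 3.0]]
censor = np.array([2.2, 1.8, 3.5])
grid = np.linspace(0.2, 1.6, 8)
est = moment_rate_estimate(events, censor, grid, h=0.4)
```

The nicks the abstract describes arise because the at-risk denominator drops discontinuously at each censoring time while the smoothed numerator varies continuously.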
2.
Longitudinal data often contain missing observations, and it is in general difficult to justify particular missing data mechanisms, whether random or not, that may be hard to distinguish. The authors describe a likelihood‐based approach to estimating both the mean response and association parameters for longitudinal binary data with drop‐outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
3.
Silphids in urban forests: Diversity and function
Wolf, Jordan M., and Gibbs, James P. Urban Ecosystems (2004) 7(4): 371-384
Many ecologists have examined the process of how urbanization reduces biological diversity but rarely have its ecological consequences been assessed. We studied forest-dwelling burying beetles (Coleoptera: Silphidae)—a guild of insects that requires carrion to complete their life cycles—along an urban-rural gradient of land use in Maryland. Our objective was to determine how forest fragmentation associated with urbanization affects (1) beetle community diversity and structure and (2) the ecological function provided by these insects, that is, decomposition of vertebrate carcasses. Forest fragmentation strongly reduced burying beetle diversity and abundance, and did so far more pervasively than urbanization of the surrounding landscape. The likelihood that beetles interred experimental baits was a direct, positive function of burying beetle diversity. We conclude that loss of burying beetle diversity resulting from forest fragmentation could have important ecological consequences in urban forests.
4.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainty. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollar range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence such future analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
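The kind of uncertainty propagation the abstract describes can be sketched by Monte Carlo sampling of the Holland B parameter through a wind profile. The code below uses the simplified cyclostrophic form of the Holland (1980) profile (Coriolis term omitted); the pressure deficit, radius of maximum winds, air density, and the sampling range for B are illustrative assumptions, not values from the audited models.

```python
import numpy as np

rng = np.random.default_rng(0)

def holland_wind(r_km, B, dp_pa=5000.0, r_max_km=30.0, rho=1.15):
    """Cyclostrophic Holland (1980) wind profile in m/s:
    V(r) = sqrt( (B * dP / rho) * (R_max/r)^B * exp(-(R_max/r)^B) )."""
    x = (r_max_km / r_km) ** B
    return np.sqrt(B * dp_pa * x * np.exp(-x) / rho)

# uncertainty analysis: sample the Holland B parameter and propagate
B_samples = rng.uniform(1.0, 2.0, size=5000)   # assumed plausible range for B
v = holland_wind(60.0, B_samples)              # wind speed 60 km from the center
lo, hi = np.percentile(v, [2.5, 97.5])
rel_spread = (hi - lo) / np.median(v)          # wind uncertainty due to B alone
```

Because structural losses grow rapidly with wind speed, even a modest spread in B translates into a much wider spread in projected losses, which is what motivates the sensitivity analysis described above.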
5.
The authors define a new semiparametric Archimedean copula family which has a flexible dependence structure. The generator of the family is a local interpolation of existing generators. It has locally‐defined dependence parameters. The authors present a penalized constrained least‐squares method to estimate and smooth these parameters. They illustrate the flexibility of their dependence model in a bivariate survival example.
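The Archimedean construction underlying the family above builds the copula from a generator via C(u, v) = φ⁻¹(φ(u) + φ(v)). The sketch below uses the standard Clayton generator purely as a stand-in for the authors' interpolated semiparametric generator, which is not specified in the abstract.

```python
def clayton_generator(t, theta):
    """Clayton generator phi(t) = (t^-theta - 1) / theta, a standard
    Archimedean generator used here only for illustration."""
    return (t ** (-theta) - 1.0) / theta

def clayton_generator_inv(s, theta):
    """Inverse generator phi^{-1}(s) = (1 + theta*s)^(-1/theta)."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def archimedean_copula(u, v, theta):
    """Archimedean copula C(u, v) = phi^{-1}(phi(u) + phi(v))."""
    return clayton_generator_inv(
        clayton_generator(u, theta) + clayton_generator(v, theta), theta)

c = archimedean_copula(0.3, 0.7, theta=2.0)
```

A semiparametric family like the authors' replaces the single global θ with locally defined parameters by interpolating between generators, so the dependence strength can vary across the unit square.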
6.
This note exhibits two independent random variables on the integers, X1 and X2, such that neither X1 nor X2 has a generalized Poisson distribution, but X1 + X2 does. This contradicts statements made by Professor Consul in his recent book.
7.
The authors propose graphical and numerical methods for checking the adequacy of the logistic regression model for matched case‐control data. Their approach is based on the cumulative sum of residuals over the covariate or linear predictor. Under the assumed model, the cumulative residual process converges weakly to a centered Gaussian limit whose distribution can be approximated via computer simulation. The observed cumulative residual pattern can then be compared both visually and analytically to a certain number of simulated realizations of the approximate limiting process under the null hypothesis. The proposed techniques allow one to check the functional form of each covariate, the logistic link function as well as the overall model adequacy. The authors assess the performance of the proposed methods through simulation studies and illustrate them using data from a cardiovascular study.
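The cumulative residual process at the heart of this method is just the running sum of residuals after ordering observations by the covariate. The sketch below is a simplified illustration for ordinary (unmatched) logistic data, using the true success probabilities in place of a fitted model; in the authors' setting the residuals would come from a fitted conditional logistic regression. All data values are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def cumulative_residual_process(x, resid):
    """Cumulative sum of residuals over the ordered covariate values,
    W(x_(k)) = n^{-1/2} * sum_{i: x_i <= x_(k)} resid_i."""
    order = np.argsort(x)
    return np.cumsum(resid[order]) / np.sqrt(len(x))

# toy logistic data with a correctly specified linear term
n = 500
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))   # true model probabilities
y = rng.binomial(1, p)
resid = y - p                                 # residuals under the true model
W = cumulative_residual_process(x, resid)
max_dev = np.abs(W).max()                     # statistic to compare against
                                              # simulated null realizations
```

In practice one would compare max_dev to the same statistic computed from many simulated realizations of the approximate Gaussian limit; a misspecified functional form for x shows up as a systematic drift in W.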
8.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most available dose-response information. To quantify statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it as a more conservative alternative to BMD itself. Computation of BMDL may involve normal confidence limits to BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, the developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for BMD and 95% confidence level for BMDL.
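The residual-bootstrap idea can be sketched on a deliberately simple continuous linear dose-response model rather than the clustered binary data and one-step update the authors use; the model, the BMR-style increment delta, and the simulated data are all illustrative assumptions, and each bootstrap replicate is fully refit.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_linear(d, y):
    """Ordinary least squares fit of y = a + b*d (illustrative stand-in
    for the dose-response model)."""
    X = np.column_stack([np.ones_like(d), d])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bmd_linear(a, b, delta=1.0):
    """Dose at which the mean response rises delta above background."""
    return delta / b

# simulated continuous dose-response data (assumed, for illustration)
dose = np.repeat([0.0, 1.0, 2.0, 4.0], 10)
y = 2.0 + 0.8 * dose + rng.normal(0, 0.5, size=dose.size)

a_hat, b_hat = fit_linear(dose, y)
resid = y - (a_hat + b_hat * dose)

# residual bootstrap: resample residuals, refit, recompute BMD
boot = []
for _ in range(2000):
    y_star = a_hat + b_hat * dose + rng.choice(resid, size=resid.size)
    a_s, b_s = fit_linear(dose, y_star)
    boot.append(bmd_linear(a_s, b_s))
bmdl = np.percentile(boot, 5)   # one-sided lower 95% confidence limit for BMD
```

Taking the 5th percentile of the bootstrap BMD distribution respects its skewness directly, which is the advantage over the symmetric delta-method interval described in the abstract.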
9.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added uncertainty to the deterministic point estimate of about one order of magnitude. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
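The "intervals prescribed by significant digits" idea can be made concrete with elementary interval arithmetic: each reported number is widened to the interval its rounding implies, and arithmetic is done on endpoints. The toy exposure calculation below (concentration times intake rate) and its input values are illustrative assumptions, not figures from the article.

```python
def digits_interval(x_str):
    """Treat a reported number as the interval implied by its
    significant digits, e.g. '2.3' stands for [2.25, 2.35]."""
    x = float(x_str)
    if '.' in x_str:
        half_ulp = 0.5 * 10 ** (-len(x_str.split('.')[1]))
    else:
        half_ulp = 0.5
    return (x - half_ulp, x + half_ulp)

def mul_interval(a, b):
    """Interval multiplication: extremes over all endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# toy exposure calculation: soil concentration * soil intake rate
conc = digits_interval('2.3')      # mg contaminant / kg soil (assumed)
intake = digits_interval('0.05')   # kg soil / day (assumed)
dose = mul_interval(conc, intake)  # interval for the daily dose
width = dose[1] - dose[0]          # uncertainty hidden by the point estimate
```

Chaining several such multiplications compounds the interval widths, which is how innocuous-looking rounding in the inputs grows into the order-of-magnitude effects the abstract reports.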
10.
Current status data arise when the death of every subject in a study cannot be determined precisely, but is known only to have occurred before or after a random monitoring time. The authors discuss the analysis of such data under semiparametric linear transformation models for which they propose a general inference procedure based on estimating functions. They determine the properties of the estimates they propose for the regression parameters of the model and illustrate their technique using tumorigenicity data.