1.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials based on a natural assumption about the heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. In phase III marker-stratified trials, treatment efficacy is expected to be established in a particular patient population, possibly a marker-defined subpopulation, and the marker's accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., an assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designed for marker assessment, including criteria that examine treatment effects in marker-negative patients. As both existing and newly developed testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the testing strategies, based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the testing strategies under the developed criteria are provided.
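The idea of asserting efficacy subpopulation by subpopulation can be illustrated with a small Monte Carlo sketch. The fixed-sequence strategy below (test the marker-positive subgroup first, then extend to the marker-negative subgroup) and all effect sizes and sample sizes are illustrative assumptions, not the paper's actual design:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def assert_rates(delta_pos, delta_neg, n_pos=100, n_neg=100,
                 alpha=0.025, n_sim=5000):
    """Probability of asserting efficacy in the marker-positive subgroup
    and in the overall population under a fixed-sequence strategy:
    test marker-positive first at level alpha; only if that succeeds,
    test marker-negative at the same level.  Effects are standardized
    mean differences; n_pos / n_neg are per-arm subgroup sizes."""
    # z statistics for the treatment-vs-control comparison per subgroup
    z_pos = rng.normal(delta_pos * np.sqrt(n_pos / 2), 1.0, n_sim)
    z_neg = rng.normal(delta_neg * np.sqrt(n_neg / 2), 1.0, n_sim)
    crit = norm.ppf(1 - alpha)
    win_pos = z_pos > crit                # efficacy in marker-positive
    win_all = win_pos & (z_neg > crit)    # efficacy extended to overall
    return win_pos.mean(), win_all.mean()
```

Under the global null both assertion probabilities stay near alpha by the fixed-sequence argument; with an effect confined to marker-positive patients, only the subgroup assertion has high probability.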
2.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specifying overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent event endpoints. However, the variance in the reestimated sample size can be considerable, particularly with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
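As a rough illustration of what a blinded information criterion can look like, the sketch below pools all counts without unblinding, estimates the event rate and a method-of-moments overdispersion parameter, and plugs them into a standard large-sample approximation for the variance of the log rate ratio. The 1:1 allocation, the splitting of the pooled rate by an assumed rate ratio, and the variance formula are simplifying assumptions, not the paper's derivation:

```python
import numpy as np

def blinded_information(counts, follow_up, ratio_assumed=1.0):
    """Blinded information for the log rate ratio under a negative
    binomial model: pooled rate + method-of-moments shape k, plugged
    into an approximate variance of the log rate ratio (illustrative)."""
    y = np.asarray(counts, dtype=float)
    t = np.asarray(follow_up, dtype=float)
    n = len(y)
    rate = y.sum() / t.sum()                 # blinded pooled event rate
    mu = rate * t                            # expected counts per subject
    # NB variance is mu + k*mu^2, so excess variance identifies k
    k = max(((y - mu) ** 2 - mu).sum() / (mu ** 2).sum(), 0.0)
    # split the pooled rate into arm rates consistent with the assumed ratio
    r0 = 2.0 * rate / (1.0 + ratio_assumed)
    r1 = ratio_assumed * r0
    tbar = t.mean()
    var_log_rr = (1.0 / (r0 * tbar) + k) / (n / 2.0) \
               + (1.0 / (r1 * tbar) + k) / (n / 2.0)
    return 1.0 / var_log_rr   # monitor until a prespecified target is reached
```

In a blinded continuous monitoring design, this quantity would be recomputed as data accrue and the trial stopped for analysis once it crosses the information target.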
3.
This paper focuses on the inference of suitable, generally nonlinear, functions in stochastic volatility models. In this context, a moving block bootstrap (MBB) approach is suggested and discussed for estimating the variance of the proposed estimators. Under mild assumptions, we show that the MBB procedure is weakly consistent. Moreover, a methodology for choosing the optimal block length in the MBB is proposed. Examples and simulations on the model are also provided to show the performance of the proposed procedure.
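A minimal sketch of the MBB variance estimator for a statistic of a weakly dependent series follows; the statistic, block length, and bootstrap size are caller-supplied, and the paper's optimal block-length selection rule is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def mbb_variance(x, stat, block_len, n_boot=2000):
    """Moving block bootstrap variance of stat(x): resample
    overlapping blocks of length block_len with replacement,
    glue them back to length n, and recompute the statistic."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    starts = np.arange(n - block_len + 1)   # all overlapping block starts
    n_blocks = int(np.ceil(n / block_len))
    reps = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        series = np.concatenate([x[s:s + block_len] for s in chosen])[:n]
        reps[b] = stat(series)
    return reps.var(ddof=1)
```

Resampling blocks rather than single observations preserves the short-range dependence that makes the ordinary bootstrap inconsistent for time series.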
4.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in previous publications. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/. Copyright © 2015 John Wiley & Sons, Ltd.
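The BE decision at any single analysis is typically a two one-sided tests (TOST) procedure on the log scale. The sketch below shows that single-look building block under a simplified paired-difference model; it is not the MSDBE procedure itself, which adds interim looks and a multiplicity adjustment on top of it:

```python
import numpy as np
from scipy import stats

def tost_be(log_diffs, alpha=0.05, theta=np.log(1.25)):
    """Two one-sided tests for average BE on the log scale.
    log_diffs are within-subject log(test) - log(reference) values;
    BE is claimed when the larger of the two one-sided p-values is
    below alpha (equivalently, the 90% CI lies inside +/- theta)."""
    d = np.asarray(log_diffs, dtype=float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() + theta) / se          # H0: mean <= -theta
    t_upp = (d.mean() - theta) / se          # H0: mean >= +theta
    return max(stats.t.sf(t_low, n - 1), stats.t.cdf(t_upp, n - 1))
```

The +/- log(1.25) limits correspond to the conventional 80-125% BE acceptance range for the geometric mean ratio.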
5.
In this paper, we consider the task of determining expected values of sample moments when the sample members have been selected based on noisy information, a recurring problem in the theory of evolution strategies. Exact expressions are derived for expected values of sums of products of concomitants of selected order statistics. Then, using Edgeworth and Cornish-Fisher approximations, explicit results are obtained that depend on coefficients which can be determined numerically. While the results are exact only for normal populations, it is shown experimentally that including skewness and kurtosis in the calculations can yield greatly improved results for other distributions.
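The selection-under-noise setting can be checked by direct simulation. The sketch below estimates the expected mean of the true values among the candidates that rank best on a noise-perturbed observation, i.e., the mean of concomitants of selected order statistics, for a standard normal population:

```python
import numpy as np

rng = np.random.default_rng(4)

def selected_mean(n_sel, n_cand, noise_sd, n_sim=20000):
    """Monte Carlo estimate of E[mean of true values among the n_sel
    candidates ranking best (smallest) on a noisy criterion], with
    n_cand standard normal candidates and additive normal noise."""
    vals = rng.standard_normal((n_sim, n_cand))
    noisy = vals + noise_sd * rng.standard_normal((n_sim, n_cand))
    idx = np.argsort(noisy, axis=1)[:, :n_sel]       # noisy ranking
    return np.take_along_axis(vals, idx, axis=1).mean()
```

With no noise and one of two candidates selected, the estimate approaches E[min of two standard normals] = -1/sqrt(pi) ≈ -0.564; increasing noise pulls the selected mean toward zero, since selection becomes less informative.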
6.
Sample selection in radiocarbon dating
Archaeologists working on the island of O'ahu, Hawai'i, use radiocarbon dating of samples of organic matter found trapped in fish-pond sediments to help them learn about the chronology of the construction and use of the aquacultural systems created by the Polynesians. At one particular site, Loko Kuwili, 25 organic samples were obtained and funds were available to date an initial nine. However, on calibration to the calendar scale, the radiocarbon determinations provided date estimates that had very large variances. As a result, major issues of chronology remained unresolved and the archaeologists were faced with the prospect of another expensive programme of radiocarbon dating. This paper presents results of research that tackles the problem of selecting samples from those still available. Building on considerable recent research that utilizes Markov chain Monte Carlo methods to aid archaeologists in their radiocarbon calibration and interpretation, we adopt the standard Bayesian framework of risk functions, which allows us to assess the optimal samples to be sent for dating. Although rather computer intensive, our algorithms are simple to implement within the Bayesian radiocarbon framework that is already in place, and they produce results that are capable of direct interpretation by the archaeologists. By dating just three more samples from Loko Kuwili, the expected variance on the date of greatest interest could be substantially reduced.
7.
A game-theoretic model and analysis of the quality of independent audits of listed companies in China
This paper analyses the game relationships among accounting firms, company management, independent directors, and regulators in the independent auditing of listed companies in China. Applying game theory, it studies how the four parties choose interactive strategies under information asymmetry and, addressing the factors that influence the multi-party game, proposes strategies for raising independent audit quality, with a view to improving the quality of independent audits of listed companies in China.
8.
This paper assesses the performance of common estimators that adjust for differences in covariates, such as matching and regression, when faced with the so-called common support problem. It also shows how different procedures suggested in the literature affect the properties of such estimators. Based on an empirical Monte Carlo simulation design, a lack of common support is found to increase the root-mean-squared error of all investigated parametric and semiparametric estimators. Dropping observations that are off support usually improves their performance, although the magnitude of the improvement depends on the particular method used.
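One widely used support-enforcement procedure of the kind compared here is the min-max rule on the estimated propensity score. A minimal sketch, assuming the propensity scores have already been estimated in a prior step:

```python
import numpy as np

def min_max_support(pscore, treated):
    """Min-max common-support rule: keep units whose propensity score
    lies in [max of the two group minima, min of the two group maxima],
    dropping treated units with no comparable controls and vice versa."""
    p = np.asarray(pscore, dtype=float)
    d = np.asarray(treated, dtype=bool)
    lo = max(p[d].min(), p[~d].min())
    hi = min(p[d].max(), p[~d].max())
    return (p >= lo) & (p <= hi)             # boolean keep-mask
```

Matching or regression adjustment would then be run only on the units the mask retains, which is the "dropping observations that are off support" step evaluated in the paper.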
9.
Missing data are often problematic when analyzing complete longitudinal social network data. We review approaches for accommodating missing data when analyzing longitudinal network data with stochastic actor-based models. One common practice is to restrict analyses to participants observed at most or all time points, to achieve model convergence. We propose and evaluate an alternative, more inclusive approach to sub-setting and analyzing longitudinal network data, using data from a school friendship network observed at four waves (N = 694). Compared to standard practices, our approach retained more information from partially observed participants, generated a more representative analytic sample, and led to less biased model estimates for this case study. The implications and potential applications for longitudinal network analysis are discussed.
10.
The close relationship between quality and maintenance of manufacturing systems has contributed to the development of integrated models which use the concepts of statistical process control (SPC) and maintenance. This article demonstrates the integration of the Shewhart individual-residual (ZX-Ze) joint control chart and maintenance for two-stage dependent processes by jointly optimizing their policies to minimize the expected total costs associated with quality, maintenance and inspection. To evaluate the effectiveness of the proposed model, two stand-alone models, a maintenance model and an SPC model, are proposed. A numerical example is then given to illustrate the application of the proposed integrated model. The results show that the integrated model outperforms the two stand-alone models with regard to the expected cost per unit time. Finally, a sensitivity analysis is conducted to develop insights into the time and cost parameters that influence the integration efforts.
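A basic ingredient of such expected-cost evaluations is the average run length (ARL) of the control chart, which governs how long an out-of-control condition goes undetected and hence how quality and maintenance costs accrue. A sketch for a plain Shewhart individuals chart with known in-control parameters (the joint ZX-Ze chart of the paper is more involved):

```python
from scipy.stats import norm

def shewhart_arl(shift, L=3.0):
    """Average run length of a Shewhart individuals chart with control
    limits at +/- L sigma when the process mean has shifted by `shift`
    sigma: the reciprocal of the per-sample signal probability."""
    p_signal = norm.cdf(-L - shift) + norm.sf(L - shift)
    return 1.0 / p_signal
```

With the conventional 3-sigma limits, the in-control ARL is about 370 samples, while a one-sigma mean shift is detected in roughly 44 samples on average; these run lengths feed directly into the expected cost per unit time of an integrated SPC-maintenance policy.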