1.
Olive Oil Consumption in Greece: A Microeconometric Analysis    Cited 2 times (1 self-citation, 1 by others)
Panagiotis Lazaridis 《Journal of Family and Economic Issues》2004,25(3):411-430
In this paper, the factors affecting at-home demand for three types of oils and fats in Greece, with emphasis on olive oil, are investigated using the linear approximation of the Almost Ideal Demand System and family budget survey data. To overcome the econometric problem created by the existence of zero expenditures, a generalization of the two-stage Heckman procedure is employed. To investigate the role of self-consumption, two different samples were used. The first includes all households; the second excludes those that acquire olive oil only from own production. According to the results, there are important differences in the first stage of the decision process between the two samples. In the second stage, by contrast, no important differences were found between the results for the two samples.
2.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes; these fits yield an estimate of the hazard function, which in turn is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates from crude lifetimes compare favourably with those given by the product-limit estimator when the crude lifetimes are modelled correctly. Other advantages are discussed.
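A small sketch of the comparison described above, for the simplest parametric case: an exponential MLE of the hazard from right-censored data versus the product-limit (Kaplan-Meier) estimator, on simulated lifetimes (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t_true = rng.exponential(scale=2.0, size=n)   # latent lifetime, true hazard 0.5
c = rng.exponential(scale=3.0, size=n)        # independent censoring time
t = np.minimum(t_true, c)                     # observed time
d = (t_true <= c).astype(int)                 # 1 = event, 0 = censored

# Exponential MLE under right censoring: hazard = events / total time at risk.
lam = d.sum() / t.sum()

def surv_param(x):
    return np.exp(-lam * x)

# Product-limit (Kaplan-Meier) estimate at the same point, for comparison.
def surv_km(x):
    order = np.argsort(t)
    ts, ds = t[order], d[order]
    at_risk = t.size - np.arange(t.size)
    factors = np.where((ds == 1) & (ts <= x), 1 - 1 / at_risk, 1.0)
    return factors.prod()

print(lam, surv_param(2.0), surv_km(2.0))  # both survivor estimates near exp(-1)
```

When the parametric family is chosen correctly, the MLE-based survivor curve tracks the product-limit estimate closely but with lower variance, which is the advantage the abstract refers to.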
3.
Cédric Béguin Beat Hulliger 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2004,167(2):275-294
Summary. As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers.  The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
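A simplified sketch in the spirit of the transformed-rank-correlation idea (not the authors' exact estimator, and without sampling weights or missing values): estimate a robust centre and scatter from medians, MADs, and a rank correlation matrix, then flag outliers by Mahalanobis distance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 500, 3
clean = rng.multivariate_normal(np.zeros(p), np.eye(p) + 0.5, size=n)
X = np.vstack([clean, np.full((5, p), 6.0)])   # 5 planted outliers

# Robust centre and scatter built from rank correlations.
center = np.median(X, axis=0)
scale = stats.median_abs_deviation(X, axis=0, scale="normal")
r, _ = stats.spearmanr(X)
R = 2 * np.sin(np.pi * r / 6)                  # rank -> Pearson correlation (normal case)
S = np.outer(scale, scale) * R                 # robust covariance estimate

# Mahalanobis distances and a chi-square cutoff reveal the outliers.
diff = X - center
d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)
outliers = np.where(d2 > stats.chi2.ppf(0.999, df=p))[0]
print(outliers)
```

The planted rows (indices 500-504) receive very large distances; under normality, only about 0.1% of clean observations exceed the cutoff.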
4.
Tim Futing Liao 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2004,167(1):125-139
Summary. Social data often contain missing information. The problem is inevitably severe when analysing historical data. Conventionally, researchers analyse complete records only. Listwise deletion not only reduces the effective sample size but also may result in biased estimation, depending on the missingness mechanism. We analyse household types by using population registers from ancient China (618–907 AD) by comparing a simple classification, a latent class model of the complete data and a latent class model of the complete and partially missing data assuming four types of ignorable and non-ignorable missingness mechanisms. The findings show that either a frequency classification or a latent class analysis using the complete records only yielded biased estimates and incorrect conclusions in the presence of partially missing data of a non-ignorable mechanism. Although simply assuming ignorable or non-ignorable missing data produced consistently similar, higher estimates of the proportion of complex households, a specification of the relationship between the latent variable and the degree of missingness by a row effect uniform association model helped to capture the missingness mechanism better and improved the model fit.
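The bias from listwise deletion under a non-ignorable mechanism is easy to demonstrate on toy data (illustrative numbers, not the Tang-dynasty registers): when records with the trait are lost more often, the complete-case estimate is systematically too low.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
y = rng.binomial(1, 0.3, size=n)            # true binary trait, prevalence 0.30

# Non-ignorable (MNAR) missingness: records with y = 1 are lost more often.
p_miss = np.where(y == 1, 0.5, 0.1)
observed = rng.random(n) > p_miss

complete_case = y[observed].mean()           # listwise-deletion estimate
print(y.mean(), complete_case)               # ~0.30 vs a biased ~0.19
```

Here P(y=1 | observed) = (0.3)(0.5) / ((0.3)(0.5) + (0.7)(0.9)) ≈ 0.19, which is exactly the kind of distortion the latent class model with an explicit missingness component is designed to correct.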
5.
Longitudinal data often contain missing observations, and it is in general difficult to justify a particular missing-data mechanism, random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood‐based approach to estimating both the mean response and association parameters for longitudinal binary data with drop‐outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
6.
Jerald F. Lawless 《Revue canadienne de statistique》2004,32(3):327-331
Oller, Gómez & Calle (2004) give a constant-sum condition for processes that generate interval‐censored lifetime data. They show that in models satisfying this condition, it is possible to estimate non‐parametrically the lifetime distribution based on a well‐known simplified likelihood. The author shows that this constant‐sum condition is equivalent to the existence of an observation process that is independent of lifetimes and which gives the same probability distribution for the observed data as the underlying true process.
7.
The development of a Municipal Information System, now better known as a local spatial data infrastructure, is considered complex because of the inter-institutional relationships it requires. In many developing countries Geographical Information Systems (GISs) are introduced, but the benefits are modest when no changes take place in the technical and organisational structures of the organisations involved. Digital databases and computer-aided design (CAD) maps are mushrooming in great variety within different private and public institutions, municipal organisations and even within single departments, with structures inherited from the paper era and thus operating on a stand-alone basis.
Many national mapping agencies are not able to provide large-scale digital urban base maps, while the absence or low quality of cadastres makes those basic core data sets unavailable or inaccessible. The result is that duplication and incompatible data are frequently observed; donor-driven stand-alone projects likewise have limited impact through the lack of institutional embedding and are not able to mature from the project to the institutional level. A positive sign, however, is that there is increasing awareness among data producers and consumers that investments in the development of digital data sets should be combined, to reduce costs and to increase the benefits of GIS in particular and of information and communication technology (ICT) in general.
Within Trujillo a long-term vision was developed to make full use of ICT and GIS to modernise all operations of the Municipality and so increase the efficiency and effectiveness of its tasks. Large investments are not feasible, however, given the very limited municipal budgets, and short-term results are required to guarantee the support of the municipal council. This paper describes three 'products' that form part of a step-by-step approach to developing a local spatial data infrastructure for Trujillo. The three, rather different, products are:
- 1. a fiscal cadastre, to increase municipal revenues through property taxation;
- 2. an ‘environmental atlas’ based on compatible spatial and attribute data sets from a variety of organisations; and
- 3. a municipal website with interactive GIS and metadata information.
8.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source, area source, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data for evaluating whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
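A sketch of the two estimation steps described above on simulated monitoring data (the concentrations, guideline value, and sample size are illustrative): bootstrap percentile bounds for the annual mean alongside the normal-theory interval, then a one-sided t-test against a guideline concentration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical 24-hour concentration samples (ppb); lognormal, as ambient data often are.
conc = rng.lognormal(mean=2.0, sigma=0.6, size=30)

# Bootstrap percentile 95% confidence bounds for the annual mean.
boot_means = rng.choice(conc, size=(10_000, conc.size), replace=True).mean(axis=1)
lo, hi = np.percentile(boot_means, [2.5, 97.5])

# Normal-theory interval for comparison.
se = conc.std(ddof=1) / np.sqrt(conc.size)
lo_n, hi_n = conc.mean() - 1.96 * se, conc.mean() + 1.96 * se

# One-sided t-test: is the site mean below a guideline of 12 ppb?
guideline = 12.0
t_stat, p_two = stats.ttest_1samp(conc, guideline)
p_one = p_two / 2 if t_stat < 0 else 1 - p_two / 2
print((lo, hi), (lo_n, hi_n), p_one)
```

With skewed data and small samples, the bootstrap bounds are typically asymmetric around the mean while the normal-theory interval is symmetric, which is one reason the paper examines both.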
9.
The authors consider the optimal design of sampling schedules for binary sequence data. They propose an approach which allows a variety of goals to be reflected in the utility function by including deterministic sampling cost, a term related to prediction, and if relevant, a term related to learning about a treatment effect. To this end, they use a nonparametric probability model relying on a minimal number of assumptions. They show how their assumption of partial exchangeability for the binary sequence of data allows the sampling distribution to be written as a mixture of homogeneous Markov chains of order k. The implementation follows the approach of Quintana & Müller (2004), which uses a Dirichlet process prior for the mixture.
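The role of partial exchangeability can be illustrated with a much simpler frequentist sketch (order k = 1, no Dirichlet process prior): under a Markov mixture representation, the pooled transition counts are sufficient, so the chain's transition matrix can be recovered from them directly.

```python
import numpy as np

rng = np.random.default_rng(5)
P_true = np.array([[0.8, 0.2],
                   [0.4, 0.6]])            # order-1 transition matrix

def simulate(T):
    s = [rng.integers(2)]
    for _ in range(T - 1):
        s.append(rng.choice(2, p=P_true[s[-1]]))
    return s

sequences = [simulate(50) for _ in range(200)]

# Under order-1 partial exchangeability, transition counts are sufficient:
# pool them across sequences and normalise each row.
counts = np.zeros((2, 2))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(P_hat)  # close to P_true
```

The Bayesian machinery in the paper replaces this point estimate with a mixture over such chains, but the sufficiency of the transition counts is what makes the representation tractable.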
10.
We propose four different GMM estimators that allow almost consistent estimation of the structural parameters of panel probit models with fixed effects for the case of small T and large N. The moments used are derived for each period from a first order approximation of the mean of the dependent variable conditional on explanatory variables and on the fixed effect. The estimators differ with respect to the choice of instruments and whether or not they use trimming to reduce the bias. In a Monte Carlo study, we compare these estimators with pooled probit and conditional logit estimators for different data generating processes. The results show that the proposed estimators outperform these competitors in several situations.
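A toy illustration of GMM estimation for a probit-type model (cross-sectional, no fixed effects or trimming, so deliberately much simpler than the panel estimators proposed above): with instruments Z = X and moments E[Z'(y - Φ(Xβ))] = 0, minimising the quadratic form in the sample moments recovers the parameters.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.2, 0.7])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

# GMM with instruments Z = X and identity weight matrix:
# g(b) = Z'(y - Phi(X b)) / n, minimise g' g.
Z = X
def objective(b):
    g = Z.T @ (y - norm.cdf(X @ b)) / n
    return g @ g

beta_hat = minimize(objective, np.zeros(2), method="BFGS").x
print(beta_hat)  # roughly [0.2, 0.7]
```

Since E[y | x] = Φ(xβ), the moment condition holds at the true parameters, and with as many instruments as parameters the GMM estimator simply solves the sample moment equations.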