In Java, we have a src folder with packages inside it, and those packages contain classes. How are projects organized in C#?
There is no proper structure; it's whatever you or your company say it is.
Kinda. There is an overall structure for monolith stuff, e.g. one could argue you have to separate service, model, and DTO/view layers, and manage project references so that they cascade as expected.
I usually have:

Solution
– Project for the business layer which contains my services, associated interfaces and global object models or POCOs that are used project wide. Also if using EF this contains my mapping profiles.
– Project for data layer which contains data repositories and associated interfaces and the db context files as well as entity objects used for EF.
– Project for global files such as global strings resx files.
– Project for utilities, this includes things like enums, constants and helper files(custom helpers, input validation etc.)
– Project for each front-end (desktop/web/mobile), contains views/viewmodels/controllers or whatever files are associated with each front-end framework I am using
Also included in the solution is a project for unit tests.
These are listed in order of hierarchy, which is important. The higher up the project, the fewer things it depends on. This follows the onion architecture model. It allows me to swap out GUIs with basically no code change at all and re-use back-ends. As is probably clear, I normally use a repository/service design pattern approach.
Also of course using dependency injection with appropriate interfaces defined.
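To sketch what those cascading project references might look like (the project names here are hypothetical, matching the layers described above), each front-end project references the layers beneath it, never the other way around:

```xml
<!-- MyApp.Web.csproj — hypothetical names; the front-end project
     references the business and utilities layers, so swapping the GUI
     means adding another front-end project with the same references -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <ItemGroup>
    <ProjectReference Include="..\MyApp.Business\MyApp.Business.csproj" />
    <ProjectReference Include="..\MyApp.Utilities\MyApp.Utilities.csproj" />
  </ItemGroup>
</Project>
```

The business project would in turn reference the data layer, and nothing references the front-ends, which is what keeps them swappable.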
Your business layer contains your EF mapping profiles? Why would the business layer even need to know you’re using EF?
Here’s an example
https://github.com/jasontaylordev/CleanArchitecture
How you organize it is really up to you. What works for a single Hello World project doesn't fit a solution with 100+ projects.
Packages in .NET use NuGet. The project contains a reference to the package, which is then downloaded to your local PC. It is cached in C:\Users\[username]\.nuget.
Don’t add packages to source control.
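To illustrate why: the package reference itself lives inside the .csproj file, so that is all source control needs (the package name and version below are just an example):

```xml
<!-- Illustrative PackageReference entry in a .csproj. Running
     `dotnet restore` downloads the package into the local cache
     (the .nuget folder mentioned above), so the binaries themselves
     never need to go into git. -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
</ItemGroup>
```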
Another good example project is this one, showing the same clean architecture but from Microsoft themselves: https://github.com/dotnet-architecture/eShopOnWeb
Hello, is Nuget what is used to download dependencies?
Then again, if you have a solution with 100 projects, I would have gone with microservices. I prefer to keep one project per API, using vertical slices with feature folders.
I think this may be what you are looking for. These are the most up-to-date guidelines I could find for project structure in C#.
https://gist.github.com/davidfowl/ed7564297c61fe9ab814

The sad truth is whatever structure you have someone will tell you it is not right.
But in all seriousness, keep it organized and consistent is my advice.
I’ve adopted the $ROOT/{src,tests} pattern for structuring projects. Things I drop in $ROOT are:
solution file
Dockerfile, .dockerignore & docker-compose.yaml (if applicable)
if there’s a docker component, I’ll also drop a Directory.Build.props file that instructs the dotnet compiler to redirect the bin and obj files (also useful if you’re sometimes building with WSL)
.gitignore, .editorconfig, README, LICENSE as necessary (I omit the license for work projects)
nuget.config if necessary (I use it mostly at work since we use a private repository)
And then a few other directories depending on what I’m building, for a package I’d probably skip all of these:
scripts: pretty much anything necessary to either launch, build, test or deploy the project; I’ll drop a local subdirectory for scripts that are used locally only
terraform, helm: deployment stuff
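The Directory.Build.props mentioned above might look something like this — a sketch only, and the artifacts path is my example choice, not a required convention:

```xml
<!-- Directory.Build.props at $ROOT — MSBuild picks this up automatically
     for every project underneath it. Redirecting bin/obj out of the
     source tree avoids clashes when building from both Windows and WSL. -->
<Project>
  <PropertyGroup>
    <BaseOutputPath>$(MSBuildThisFileDirectory)artifacts\bin\$(MSBuildProjectName)\</BaseOutputPath>
    <BaseIntermediateOutputPath>$(MSBuildThisFileDirectory)artifacts\obj\$(MSBuildProjectName)\</BaseIntermediateOutputPath>
  </PropertyGroup>
</Project>
```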
For a solution that has multiple deployables (eg any combination of an API, queue worker, cli tool, etc), I’ll divide up the src directory like:
src/My.Project.Api
src/My.Project.QueueWorker
src/My.Project.Core (shared code)
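Putting the pieces above together, the skeleton of the layout can be scaffolded in one go (the My.Project.* names are placeholders, as above):

```shell
#!/bin/sh
# Sketch of the $ROOT/{src,tests} layout described above;
# project names (My.Project.*) are placeholders.
mkdir -p src/My.Project.Api src/My.Project.QueueWorker src/My.Project.Core
mkdir -p tests/My.Project.Tests
mkdir -p scripts/local terraform helm
```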
For tests, I tend to use just a single test project, as it makes coverage collection easier. If it's really necessary to divide them up, I'll name them the same as the project they cover and end the name with .Tests.
I’ll also divide the terraform and helm directories up like the src directory:
{terraform,helm}/My.Project.Api
{terraform,helm}/My.Project.QueueWorker
etc
And then I can pass the project name to the deploy scripts and that handles finding the right stuff. This also has the effect of making it really, really easy to find what infrastructure goes with what project.
I'll also use just one Dockerfile for the entire solution, with a script that launches the correct deployable. I end up with a bigger image at the end, but I use fewer images. Up to you whether this is worthwhile. The biggest issue I've seen is that a queue worker ends up with some unnecessary runtime stuff if it's packaged with an API.
For the docker-compose, I make very, very liberal use of volume mounts to only mount what's necessary for each runtime (this way there's not a bunch of unnecessarily triggered watch recompiles).
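A minimal sketch of that mounting approach, assuming the My.Project.* layout from earlier (service names and paths are hypothetical):

```yaml
# docker-compose.yaml sketch: each runtime mounts only its own source
# plus the shared Core project, so editing the API source doesn't
# trigger a watch rebuild of the queue worker.
services:
  api:
    build: .
    volumes:
      - ./src/My.Project.Api:/app/src/My.Project.Api
      - ./src/My.Project.Core:/app/src/My.Project.Core
  queue-worker:
    build: .
    volumes:
      - ./src/My.Project.QueueWorker:/app/src/My.Project.QueueWorker
      - ./src/My.Project.Core:/app/src/My.Project.Core
```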
Just slap all the classes in one file and hope for the best, said the person responsible for the bit of software I’m working on. They also said, “slap all the logic into one big method and hope for the best”.
That's fucked, lol. Makes me shiver.
That's a different interpretation of single purpose.
There is no proper structure. Every team, client, and project I have worked on has structured things differently. I would say what's important is trying to ensure that all of the projects a dev is likely to work on follow a similar pattern. Other teams may work differently, which is fine. But that other team hopefully aims for consistency between its projects in the same way, so once you've worked on one, you can work on them all.
This is a harder problem than people give it credit for, but the first thing you need to do is think about what depends on what, and that will dictate how many libraries/directories you need. I generally just let Visual Studio do its thing in C#; it prefers a flat structure where every project gets its own directory. I then just focus on what projects I need.
Having a presentation layer, business layer, and data layer each in their own library project is a good first rudimentary way to break out dependencies. After that, I'd look into Domain-Driven Design.

null reference exceptions
